Tag: Hashicorp Blog

Canary deployments with Consul Service Mesh

This is the fourth post of the blog series highlighting new features in Consul service mesh.

Last month at HashiConf EU we announced Consul 1.6.0. This release delivers a set of new Layer 7 traffic management capabilities including L7 traffic splitting, which enables canary service deployments.

This blog post will walk you through the steps necessary to split traffic between two upstream services.

Canary Deployments

A Canary deployment is a technique for deploying a new version of a service, while avoiding downtime. During a canary deployment you shift a small percentage of traffic to a new version of a service while monitoring its behavior. Initially you send the smallest amount of traffic possible to the new service while still generating meaningful performance data. As you gain confidence in the new version you slowly increase the proportion of traffic it handles. Eventually, the canary version handles 100% of all traffic, at which point the old version can be completely deprecated and then removed from the environment.

The new version of the service is called the canary version, as a reference to the “canary in a coal mine”.

To determine whether the new version is functioning correctly, you must have observability into your application. Metrics and tracing data will allow you to confirm that the new version is working as expected and not throwing errors. In contrast to Blue/Green deployments, which transition to a new version of a service in a single step, Canary deployments take a more gradual approach, which helps you guard against service errors that only manifest under a particular load.

Prerequisites

The steps in this guide use Consul’s service mesh feature, Consul Connect. If you aren’t already familiar with it you can learn more by following this guide.

We created a demo environment for the steps we describe here. The environment relies on Docker and Docker Compose. If you do not already have Docker and Docker Compose, you can install them from Docker's install page.

Environment

The demo architecture you’ll use consists of three services: a public Web service, two versions of an API service, and a Consul server. The services make up a two-tier application; the Web service accepts incoming traffic and makes an upstream call to the API service. You’ll imagine that version 1 of the API service is already running in production and handling traffic, and that version 2 contains some changes you want to ship in a canary deployment.
Consul Traffic splitting

To deploy version 2 of your API service, you will:
1. Start an instance of the v2 API service in your production environment.
2. Set up a traffic split to make sure v2 doesn’t receive any traffic at first.
3. Register v2 so that Consul can send traffic to it.
4. Slowly shift traffic to v2 and away from v1 until the new version is handling all the traffic.

Starting the Demo Environment

First clone the repo containing the source and examples for this blog post.
```shell
$ git clone git@github.com:hashicorp/consul-demo-traffic-splitting.git
```

Change directories into the cloned folder, and start the demo environment with docker-compose up. This command will run in the foreground, so you’ll need to open a new terminal window after you run it.

```shell
$ docker-compose up

Creating consul-demo-traffic-splitting_api_v1_1 ... done
Creating consul-demo-traffic-splitting_consul_1 ... done
Creating consul-demo-traffic-splitting_web_1 ... done
Creating consul-demo-traffic-splitting_web_envoy_1 ... done
Creating consul-demo-traffic-splitting_api_proxy_v1_1 ... done
Attaching to consul-demo-traffic-splitting_consul_1, consul-demo-traffic-splitting_web_1, consul-demo-traffic-splitting_api_v1_1, consul-demo-traffic-splitting_web_envoy_1, consul-demo-traffic-splitting_api_proxy_v1_1
```

The following services will automatically start in your local Docker environment and register with Consul:

* Consul server
* Web service with Envoy sidecar
* API service version 1 with Envoy sidecar

You can see Consul’s configuration in the consul_config folder, and the service definitions in the service_config folder.

Once everything is up and running, you can view the health of the registered services by looking at the Consul UI at http://localhost:8500. All services should be passing their health checks.
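
If you prefer the terminal, you can also run a quick check against Consul's HTTP API (assuming the demo's default port mapping of 8500):

```shell
# List all services registered in the Consul catalog
$ curl -s localhost:8500/v1/catalog/services
```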

Curl the Web endpoint to make sure that the whole application is running. You will see that the Web service gets a response from version 1 of the API service.

```shell
$ curl localhost:9090
Hello World

Upstream Data: localhost:9091

Service V1%
```

Initially, you will want to deploy version 2 of the API service to production without sending any traffic to it, to make sure that it performs well in a new environment. To prevent traffic from flowing to version 2 when you register it, you will preemptively set up a traffic split that sends 100% of your traffic to version 1 of the API service and 0% to the not-yet-deployed version 2. Splitting the traffic makes use of the new Layer 7 features built into Consul service mesh.

Configuring Traffic Splitting

Traffic Splitting uses configuration entries (introduced in Consul 1.5 and 1.6) to centrally configure the services and Envoy proxies. There are three configuration entries you need to create to enable traffic splitting:
* Service Defaults for the API service to set the protocol to HTTP.
* Service Splitter which defines the traffic split between the service subsets.
* Service Resolver which defines which service instances are version 1 and 2.

Configuring Service Defaults

Traffic splitting requires that the upstream application uses HTTP, because splitting happens on layer 7 (on a request by request basis). You will tell Consul that your upstream service uses HTTP by setting the protocol in a “service defaults” configuration entry for the API service. This configuration is already in your demo environment at l7_config/api_service_defaults.json. It looks like this.

```json
{
  "kind": "service-defaults",
  "name": "api",
  "protocol": "http"
}
```

The kind field denotes the type of configuration entry which you are defining; for this example, service-defaults. The name field defines which service the service-defaults configuration entry applies to. (The value of this field must match the name of a service registered in Consul, in this example, api.) The protocol is http.

To apply the configuration, you can either use the Consul CLI or the API. In this example we’ll use the configuration entry endpoint of the HTTP API, which is available at http://localhost:8500/v1/config. To apply the config, use a PUT operation in the following command.

```shell
$ curl localhost:8500/v1/config -XPUT -d @l7_config/api_service_defaults.json
true%
```
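
The same entry can also be applied with the Consul CLI instead of curl; a minimal sketch, assuming the CLI can reach the local agent:

```shell
$ consul config write l7_config/api_service_defaults.json
```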

For more information on service-defaults configuration entries, see the documentation.

Configuring the Service Resolver

The next configuration entry you need to add is the Service Resolver, which allows you to define how Consul’s service discovery selects service instances for a given service name.

Service Resolvers allow you to filter for subsets of services based on information in the service registration. In this example, we are going to define the subsets "v1" and "v2" for the API service, based on its registered metadata. API service version 1 in the demo is already registered with the tag v1 and service metadata version:1. When you register version 2 you will give it the tag v2 and the metadata version:2. The name field is set to the name of the service in the Consul service catalog.

The service resolver is already in your demo environment at l7_config/api_service_resolver.json and it looks like this.

```json
{
  "kind": "service-resolver",
  "name": "api",
  "subsets": {
    "v1": {
      "filter": "Service.Meta.version == 1"
    },
    "v2": {
      "filter": "Service.Meta.version == 2"
    }
  }
}
```

Apply the service resolver configuration entry using the same method you used in the previous example.

```shell
$ curl localhost:8500/v1/config -XPUT -d @l7_config/api_service_resolver.json
true%
```

For more information about service resolvers see the documentation.

Configure Service Splitting – 100% of traffic to Version 1

Next, you’ll create a configuration entry that will split percentages of traffic to the subsets of your upstream service that you just defined. Initially, you want the splitter to send all traffic to v1 of your upstream service, which prevents any traffic from being sent to v2 when you register it. In a production scenario, this would give you time to make sure that v2 of your service is up and running as expected before sending it any real traffic.

The configuration entry for Service Splitting is of kind service-splitter. Its name specifies which service the splitter will act on. The splits field takes an array which defines the different splits; in this example, there are only two splits, but it is possible to configure more complex scenarios. Each split has a weight which defines the percentage of traffic to distribute to that service subset. The total weights for all splits must equal 100. For our initial split, we are going to configure all traffic to be directed to the service subset v1.

The service splitter configuration already exists in your demo environment at l7_config/api_service_splitter_100_0.json and looks like this.

```json
{
  "kind": "service-splitter",
  "name": "api",
  "splits": [
    {
      "weight": 100,
      "service_subset": "v1"
    },
    {
      "weight": 0,
      "service_subset": "v2"
    }
  ]
}
```

Apply this configuration entry by issuing another PUT request to the Consul’s configuration entry endpoint of the HTTP API.

```shell
$ curl localhost:8500/v1/config -XPUT -d @l7_config/api_service_splitter_100_0.json
true%
```

This scenario is the first stage in our Canary deployment; you can now launch the new version of your service without it immediately being used by the upstream load balancing group.
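
Before launching version 2, you can read the splitter back from the configuration entry endpoint to confirm it is in place; a quick sanity check against the same HTTP API used above:

```shell
$ curl -s localhost:8500/v1/config/service-splitter/api
```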

Start and Register API Service Version 2

Next you’ll start the canary version of the API service (version 2), and register it with the settings that you used in the configuration entries for resolution and splitting. Start the service, register it, and start its connect sidecar with the following command. This command will run in the foreground, so you’ll need to open a new terminal window after you run it.

```shell
$ docker-compose -f docker-compose-v2.yml up
```

Check that the service and its proxy have registered by looking for new v2 tags next to the API service and API sidecar proxies in the Consul UI.
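
You can also confirm the registration from the terminal. The health endpoint accepts a tag filter, so querying for the v2 tag should return the new instance, while requests through the Web service should still be answered only by version 1 because of the 100/0 split:

```shell
# Health entries for API instances carrying the v2 tag
$ curl -s 'localhost:8500/v1/health/service/api?tag=v2'

# Traffic through the Web service still reaches only version 1
$ curl localhost:9090
```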

Configure Service Splitting – 50% Version 1, 50% Version 2

Now that version 2 is running and registered, the next step is to gradually increase traffic to it by changing the weight of the v2 service subset in the service splitter configuration. Let’s increase the weight of the v2 subset to 50%. Remember: the total weight of all splits must equal 100, so you will also reduce the weight of the v1 subset to 50. The configuration file is already in your demo environment at l7_config/api_service_splitter_50_50.json and it looks like this.

```json
{
  "kind": "service-splitter",
  "name": "api",
  "splits": [
    {
      "weight": 50,
      "service_subset": "v1"
    },
    {
      "weight": 50,
      "service_subset": "v2"
    }
  ]
}
```

Apply the configuration as before.

```shell
$ curl localhost:8500/v1/config -XPUT -d @l7_config/api_service_splitter_50_50.json
true%
```

Now that you’ve increased the percentage of traffic to v2, curl the web service again. You will see traffic equally distributed across both of the service subsets.

```shell
$ curl localhost:9090
Hello World

Upstream Data: localhost:9091

Service V1%
$ curl localhost:9090
Hello World

Upstream Data: localhost:9091

Service V2%
$ curl localhost:9090
Hello World

Upstream Data: localhost:9091

Service V1%
```

If you were actually performing a canary deployment you would want to choose a much smaller percentage for your initial split: the smallest possible percentage that would give you reliable data on service performance. You would then slowly increase the percentage by iterating over this step as you gained confidence in version 2 of your service. Some companies may eventually choose to automate the ramp up based on preset performance thresholds.
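
One informal way to watch the split from the command line is to sample the Web endpoint repeatedly and count which version answers. This is a rough sketch that assumes the demo's "Service V1"/"Service V2" response bodies shown above:

```shell
# Send 20 requests and tally the responding versions
$ for i in $(seq 1 20); do curl -s localhost:9090 | grep -o 'Service V[12]'; done | sort | uniq -c
```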

Configure Service Splitting – 100% Version 2

Once you are confident that the new version of the service is operating correctly, you can send 100% of traffic to the version 2 subset. The configuration for a 100% split to version 2 looks like this.

```json
{
  "kind": "service-splitter",
  "name": "api",
  "splits": [
    {
      "weight": 0,
      "service_subset": "v1"
    },
    {
      "weight": 100,
      "service_subset": "v2"
    }
  ]
}
```

Apply it with a call to the HTTP API config endpoint as you did before.

```shell
$ curl localhost:8500/v1/config -XPUT -d @l7_config/api_service_splitter_0_100.json
true%
```

Now when you curl the web service again, 100% of traffic is sent to the version 2 subset.

```shell
$ curl localhost:9090
Hello World

Upstream Data: localhost:9091

Service V2%
$ curl localhost:9090
Hello World

Upstream Data: localhost:9091

Service V2%
$ curl localhost:9090
Hello World

Upstream Data: localhost:9091

Service V2%
```

Typically in a production environment, you would now remove the version 1 service to release capacity in your cluster. Congratulations, you’ve now completed the deployment of version 2 of your service.

Clean up

To stop and remove the containers and networks that you created you will run docker-compose down twice: once for each of the docker compose commands you ran. Because containers you created in the second compose command are running on the network you created in the first command, you will need to bring down the environments in the opposite order that you created them in.

First you’ll stop and remove the containers created for v2 of the API service.

```shell
$ docker-compose -f docker-compose-v2.yml down
Stopping consul-demo-traffic-splitting_api_proxy_v2_1 ... done
Stopping consul-demo-traffic-splitting_api_v2_1 ... done
WARNING: Found orphan containers (consul-demo-traffic-splitting_api_proxy_v1_1, consul-demo-traffic-splitting_web_envoy_1, consul-demo-traffic-splitting_consul_1, consul-demo-traffic-splitting_web_1, consul-demo-traffic-splitting_api_v1_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Removing consul-demo-traffic-splitting_api_proxy_v2_1 ... done
Removing consul-demo-traffic-splitting_api_v2_1 ... done
Network consul-demo-traffic-splitting_vpcbr is external, skipping
```

Then, you’ll stop and remove the containers and the network that you created in the first docker compose command.

```shell
$ docker-compose down
Stopping consul-demo-traffic-splitting_api_proxy_v1_1 ... done
Stopping consul-demo-traffic-splitting_web_envoy_1 ... done
Stopping consul-demo-traffic-splitting_consul_1 ... done
Stopping consul-demo-traffic-splitting_web_1 ... done
Stopping consul-demo-traffic-splitting_api_v1_1 ... done
Removing consul-demo-traffic-splitting_api_proxy_v1_1 ... done
Removing consul-demo-traffic-splitting_web_envoy_1 ... done
Removing consul-demo-traffic-splitting_consul_1 ... done
Removing consul-demo-traffic-splitting_web_1 ... done
Removing consul-demo-traffic-splitting_api_v1_1 ... done
Removing network consul-demo-traffic-splitting_vpcbr
```

Summary

In this blog, we walked you through the steps required to perform Canary deployments using traffic splitting and resolution. For more in-depth information on Canary deployments, Danilo Sato has written an excellent article on Martin Fowler's website.

The advanced L7 traffic management in 1.6.0 is not limited to splitting. It also includes HTTP-based routing and new settings for service resolution. In combination, these features enable sophisticated traffic routing and service failover. All the new L7 traffic management settings can be found in the documentation. If you’d like to go further, combine it with our guide on L7 Observability to implement some of the monitoring needed for new service deployments.

Please keep in mind that Consul 1.6 RC isn’t suited for production deployments. We’d appreciate any feedback or bug reports you have in our GitHub issues, and you’re welcome to ask questions in our new community forum.

from Hashicorp Blog

Vault Learning Resources: Vault 1.2 Feature Introduction

With the recent release of Vault 1.2, we are excited to introduce several new 1.2 feature guides on HashiCorp Learn to help you understand how they work.

What's New?

Database Static Roles and Credential Rotation

Now Vault's database secrets engine can manage existing database credentials. This allows users to delegate the task of periodic password rotation to Vault.

This guide walks you through the steps to define a static role and configure its password rotation cycle.
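
As a preview of what the guide covers, defining a static role looks roughly like this. The mount path, connection name, and role name below are illustrative:

```shell
# Map an existing database user to a Vault static role and rotate its
# password every 24 hours (86400 seconds).
$ vault write database/static-roles/app-readonly \
    db_name=my-database \
    username=app-readonly \
    rotation_period=86400

# Applications then read the current, Vault-managed password.
$ vault read database/static-creds/app-readonly
```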

KMIP Secrets Engine

NOTE: KMIP secrets engine is a Vault Enterprise feature.

Vault server can now serve as a KMIP (Key Management Interoperability Protocol) server.

This guide walks through the steps to enable and configure the KMIP secrets engine.

Vault HA Cluster with Integrated Storage

NOTE: Vault's Integrated Storage is a Technology Preview feature and not suitable for deployment in production.

Vault's integrated storage provides an option to use the persistent storage directly built into Vault which makes the operational tasks simpler. If Vault encounters an outage, Vault is the only product you need to diagnose.

This guide demonstrates the deployment of a Vault cluster using the integrated storage.

Download Vault 1.2 today and explore those new features!

from Hashicorp Blog

Announcing the HashiCorp Vault Helm Chart

This week we're releasing an official Helm Chart for Vault. Using the Helm Chart, you can start a Vault cluster running on Kubernetes in just minutes. This Helm chart will also be the primary mechanism for setting up future roadmapped Vault and Kubernetes features. By using the Helm chart, you can greatly reduce the complexity of running Vault on Kubernetes, and it gives you a repeatable deployment process in less time (vs rolling your own).

The Helm chart will initially support installing and updating the open-source version of Vault in three distinct modes: Single Server, Highly-Available (HA), and Dev mode. We are actively working on a version for Vault Enterprise and it will be available in the future. The Helm chart allows you to run Vault directly on Kubernetes, so in addition to the native integrations provided by Vault itself, any other tool built for Kubernetes can choose to leverage Vault.

Here are a few common use-cases for running Vault on Kubernetes:

  • Running Vault as a Shared Service: The Vault server cluster runs directly on Kubernetes. This can be used by applications running within Kubernetes as well as external to Kubernetes, as long as they can communicate to the server via the network.
  • Accessing and Storing Secrets: Applications using the Vault service running in Kubernetes can access and store secrets from Vault using a number of different Secret Engines and Authentication Methods.
  • Running a Highly Available Vault Service: By using pod affinities, highly available backend storage (such as Consul) and Auto Unseal, Vault can become a highly available service in Kubernetes.
  • Encryption as a Service: Applications using the Vault service running in Kubernetes can leverage the Transit Secrets Engine as "encryption as a service". This allows applications to offload encryption needs to Vault before storing data at rest.
  • Audit Logs for Vault: Operators can choose to attach a persistent volume to the Vault cluster which can be used to store audit logs.

Video

To learn more about the Vault Helm chart, watch the video below or scroll down to read more. The video shows the Helm chart being used to install and configure Vault in each of the supported modes (dev, single server, and highly-available) on a 5-node Kubernetes cluster.

Running a Vault Cluster

To use the Helm chart, you must download or clone the hashicorp/vault-helm GitHub repository and run Helm against the directory. When running Helm, we highly recommend you always checkout a specific tagged release of the chart to avoid any instabilities from master.

After downloading the repository, please check out a tagged release:

```
# Clone the repo
$ git clone https://github.com/hashicorp/vault-helm.git
$ cd vault-helm

# Checkout a tagged version
$ git checkout v0.1.0
```

As mentioned earlier, the Helm chart supports three distinct modes: Single Server, Highly-Available (HA), and Dev mode. The default when using this chart without options is Single Server mode, where it will provision a volume for you and use that to store your data.

Then, install the chart:

```
# Install the chart
$ helm install --name=vault .
```

In just a minute, you'll have a standalone Vault pod deployed on Kubernetes. However, Vault still needs to be initialized and unsealed; we can verify that by checking its status.

```
# Check status
$ kubectl exec -it vault-0 -- vault status
```

So, let’s initialize the Vault instance.

```
# Initialize
$ kubectl exec -it vault-0 -- vault operator init -n 1 -t 1
```

Finally, let’s unseal the vault so we can use it.

```
# Unseal vault
$ kubectl exec -it vault-0 -- vault operator unseal
```

Note that if we had used Dev mode to install the Helm chart, the Vault instance would be automatically initialized and unsealed, and would use in-memory storage.

```
# Alternative: install the chart in dev mode
$ helm install --name=vault --set='server.dev.enabled=true' .
```

You now have a fully functional Single Server Vault service in your Kubernetes cluster, and you can start routing traffic to it. There is an obvious problem here: we are using a single pod instance with file-backed storage, which is a single point of failure. This is where the Highly-Available (HA) mode comes into play. You can read more about HA mode via the configuration options available in the values.yaml file found in the hashicorp/vault-helm GitHub repository, or through the Vault Helm Chart documentation.
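
For example, assuming a Consul backend is available for storage, HA mode can be enabled with a single value override. This is a sketch; values.yaml documents the full set of options:

```
# Alternative: install the chart in HA mode
$ helm install --name=vault --set='server.ha.enabled=true' .
```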

Next Steps

The Helm chart is available now on GitHub. We also plan to transition to using a real Helm repository soon. To learn more, see the Helm Chart documentation or the documentation on Running Vault on Kubernetes. Also, if you enjoy playing around with this type of stuff, maybe you’d be interested in working at Hashicorp too since we’re hiring!

from Hashicorp Blog

Video Recordings From HashiConf EU 2019: Keynotes and Breakout Sessions

2019 saw the return of HashiConf EU, a three-day, two-track European conference complete with training, talks, and major product announcements. This video provides a glimpse of what the experience was like in Amsterdam at the beginning of last month:

The conference had nearly 800 attendees—almost double the number of attendees from HashiDays Amsterdam in 2018. We also built out an expanded keynote hall in this year’s venue. You can watch the venue come together in this time-lapse setup and conference video. Of course, more people also meant more caffeine: we served around 2,100 espresso drinks over the three days.

Most importantly, HashiConf EU was built around the content. Every year we're inspired by the diversity of use cases for HashiCorp tools and the number of companies using them. We had more talks submitted during our call for papers this year than ever before. We’re also excited to share these sessions with you both on-site and via video recordings. Today, we are pleased to share 25 different videos of our HashiConf EU keynotes and breakout sessions.

A number of the videos listed below include edited transcripts, and we have more to come. You can also visit this link to the HashiCorp Resource Library to filter talks by product, case studies, demos, and more.

Day One

Day Two

If you liked these presentations but are looking to participate in the face-to-face discussions and in-person networking that happen at HashiCorp conferences, don’t fret. There are still tickets available for HashiCorp’s flagship community conference, HashiConf, taking place this September in Seattle. More information here.

from Hashicorp Blog

Announcing HashiCorp Vault 1.2

We are excited to announce the public availability of HashiCorp Vault 1.2. Vault is a tool to provide secrets management, data encryption, and identity management for any infrastructure and application.

Vault 1.2 is focused on supporting new architectures for automated credential and cryptographic key management at a global, highly-distributed scale. This release introduces new mechanisms for users and applications to manage sensitive data such as cryptographic keys and database accounts, and exposes new interfaces that improve Vault’s ability to automate secrets management, encryption as a service, and privileged access management.

  • KMIP Server Secret Engine (Vault Enterprise only): Allow Vault to serve as a KMIP Server for automating secrets management and encryption as a service workflows with enterprise systems.
  • Integrated Storage (tech preview): Manage Vault’s secure storage of persistent data without an external storage backend, supporting High Availability and Replication.
  • Identity Tokens: Produce OIDC-compliant JWT tokens tied to Vault identities for use in third-party systems.
  • Database Static Credential Rotation: Automate the rotation of pre-existing database credentials using the DB Secret Engine.

The release also includes additional new features, secure workflow enhancements, general improvements, and bug fixes. The Vault 1.2 changelog provides a full list of features, enhancements, and bug fixes.

KMIP Server Secret Engine

Note: This is a Vault Enterprise feature

Vault Enterprise 1.2 sees the introduction of the Key Management Interoperability Protocol (or KMIP) standard as a new method for automatically integrating with many enterprise software and hardware platforms for secrets management.

KMIP is an open OASIS protocol for managing encryption workloads and data. KMIP instructions focus on tasks including key generation, key management, and encryption between a KMIP Client (an application or system requesting a cryptographic task) and a KMIP Server (a platform performing that cryptographic task).

In Vault Enterprise 1.2, we are introducing a new Secret Engine that supports Vault serving as a KMIP Server for client requests. This allows Vault to integrate with an ecosystem of over a hundred common enterprise platforms for use cases such as Transparent Database Encryption (TDE); Full Disk Encryption (FDE) and virtual volume encryption; and multi-cloud/hybrid cloud Bring Your Own Key (“BYOK”) key management.
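
Getting started amounts to enabling the new secrets engine and telling it where to listen for KMIP clients. A minimal sketch (Vault Enterprise only; the listen address is illustrative):

```shell
# Enable the KMIP secrets engine and configure its listener
$ vault secrets enable kmip
$ vault write kmip/config listen_addrs=0.0.0.0:5696
```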

1.2 is the beginning of our story with supporting the KMIP protocol and not every instruction or object will be supported with this release. We will continue to support additional KMIP instructions and objects over the course of the next few releases, and are in the process of certifying Vault Enterprise interoperability with major enterprise infrastructure vendors.

Please consult our guide on the KMIP Secret Engine and documentation on KMIP for more information.

Integrated Storage

Note: this is a preview release feature and Integrated Storage is currently not supported for production workloads in Vault Enterprise 1.2

Integrated Storage is a new feature in Vault 1.2 that allows Vault admins to configure an internal storage option for storing Vault’s persistent data at rest. Rather than using an external storage backend, Integrated Storage exists as a purely Vault internal option that leverages the Raft consensus protocol to provide highly available storage for Vault data. Our goal with integrated storage is to provide another option for managing Vault’s storage backend that doesn’t require proficiency in a separate tool or platform.

Integrated storage is being released as a tech preview feature, which means that we are not providing enterprise support for Vault Enterprise users deploying integrated storage in production in Vault 1.2. We also do not yet have storage migration support for this storage backend. We’re excited to release this new architecture with the community to gather feedback, and will officially add it to our supported reference architecture for Vault Enterprise in a later release.

For more information on integrated storage, see here.

Identity Tokens

Identity Tokens are OIDC-compliant tokens that allow third-party applications to verify JWT-based claims on Vault identities. Using Identity Tokens, an application can verify a Vault entity, its group membership, and its identity management system aliases without logging into Vault. Identity Tokens can also carry additional metadata that can be used for further identification or authorization purposes.
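
As a rough sketch of the workflow (the key and role names are illustrative): an operator creates a named signing key and a role tied to it, and an authenticated client then requests a token for that role.

```shell
# Create a named key and a role that uses it
$ vault write identity/oidc/key/my-key allowed_client_ids="*"
$ vault write identity/oidc/role/my-role key="my-key" ttl="30m"

# An authenticated entity generates a signed identity token
$ vault read identity/oidc/token/my-role
```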

For more information on Identity Tokens, see here.

Database Static Credential Rotation

We have extended the database secrets engine to manage and rotate credentials for preexisting users, in addition to Vault’s longstanding ability to generate a temporary set of full credentials, including the username. This allows Vault to securely serve as the source of truth for applications that use a set of “known” database logins, and to serve traditional Privileged Access Management (PAM) use cases involving static DB credentials.

For more on the Database Secret Engine, see here.

Other Features

There are many new features in Vault 1.2 that have been developed over the course of the 1.1.x releases. We have summarized a few of the larger features below, and as always consult the changelog for full details:

  • Vault API explorer: The Vault UI now includes an embedded API explorer where you can browse the endpoints available to you and make requests. To try it out, open the Web CLI and type api.
  • ElasticSearch database plugin: New ElasticSearch database plugin issues unique, short-lived ElasticSearch credentials.
  • Pivotal Cloud Foundry plugin: New auth method using Pivotal Cloud Foundry certificates for Vault authentication.
  • Vault Agent Namespace Support: Add optional namespace parameter, which sets the default namespace for the auto-auth functionality.
  • New UI Features: An HTTP Request Volume Page and new UI for editing LDAP Users and Groups have been added.

Upgrade Details

Vault 1.2 introduces significant new functionality. As such, we provide both general upgrade instructions and a Vault 1.2-specific upgrade page.

As always, we recommend upgrading and testing this release in an isolated environment. If you experience any issues, please report them on the Vault GitHub issue tracker or post to the Vault mailing list.

For more information about HashiCorp Vault Enterprise, visit https://www.hashicorp.com/products/vault. Users can download the open source version of Vault at https://www.vaultproject.io.

We hope you enjoy Vault 1.2!

from Hashicorp Blog

HashiCorp and Microsoft Extend Multi-year Collaboration Agreement

HashiCorp is pleased to announce an extension of the multi-year collaboration agreement signed with Microsoft in 2017. The purpose of the initial agreement was to expand the capabilities of the Azure Terraform provider to include more resources and services for Azure users to leverage through Terraform. In the two years since that agreement was signed, Microsoft and HashiCorp have collaborated on a number of new enhancements for customers including:

In addition to the work done on the Terraform provider, Microsoft and HashiCorp announced a series of Vault, Consul, and Packer integrations. Details on these announcements can be seen in this blog by Brendan Burns. HashiCorp was also recently recognized for the achievements made in open-source contributions to Azure by being named the 2019 Partner of the Year for Open Source Infrastructure and Applications on Azure.

The extension of this collaboration agreement signifies a continued commitment by both companies to deliver a great experience while using HashiCorp tools on Azure. Customers can look forward to continued support from Microsoft and a robust Terraform provider built on co-engineering efforts.

We look forward to executing on additional growth and innovation with the Microsoft team.

from Hashicorp Blog

Announcing VCS-enabled Versioned Policy Sets in Terraform Enterprise

At the end of last year we introduced policy sets for HashiCorp Terraform Enterprise. Policy sets are a feature for users to enforce policies on select workspaces of their choice with Sentinel, the embeddable HashiCorp policy as code framework. They enable organizations to create logical groups of policies to apply against different environments and for different components of their infrastructure.

Today at HashiConf EU we are pleased to announce that policy sets may now be configured to source policies from version control systems (VCS), bringing all of the immutability benefits that users currently enjoy with Terraform configuration to Sentinel policies.

Immutability is a guiding principle in all of our products — part of the Tao of HashiCorp. Infrastructure management done responsibly is a versioned, auditable, repeatable, and collaborative process. All of these principles are exactly what Terraform Enterprise provides for infrastructure. With versioned policy sets, these same principles can now be applied to governance and policy management.

Writing Policies

Policy code can now be sourced directly from any VCS provider configured in Terraform Enterprise. For a complete list of supported VCS providers, see the VCS Integration documentation.

As a very simple and light-hearted example, we'll write some Sentinel policy against Terraform configuration using the random_pet resource from the random provider, which generates random names (e.g.: gentle-reindeer, daring-dodo). We'll store our policy and configuration in a repository on GitHub.com.

The directory contains two files:
* sentinel.hcl – This is the configuration file which identifies Sentinel policy files and provides their configuration, written in HCL. Our configuration file specifies each policy to be checked within the set, as well as the enforcement level of the policy.

```
policy "must-have-three-words" {
  enforcement_level = "hard-mandatory"
}
```

* must-have-three-words.sentinel – This is the policy code itself. In our example, we'll ensure that each pet name resource is configured to generate at least three words (really-superb-toucan would be valid, but epic-cod would not). Multiple policy blocks may be defined in the sentinel.hcl file to configure more policies.

```
# Enforces each pet name to have at least three words.
import "tfconfig"

main = rule {
  all tfconfig.resources.random_pet as _, pet {
    pet.config.length >= 3
  }
}
```

With our policy code pushed to VCS, we're ready to configure Terraform Enterprise to use it.

Creating a Versioned Policy Set

When creating a new policy set, you are presented with a new user interface where you can select a Policy Set Source from your configured VCS providers, or opt to directly upload a new version of the policy code via API:

Creating Versioned Policies

Under "More options", you can also specify a path where your policies are stored in the remote repository (/pet-names in our example) as well as a non-default branch, if applicable.

After creating the policy set, you'll see it appear in the Policy Sets screen along with information like the repository name and latest commit SHA that policy code was sourced from.

Creating Versioned Policies2

Enforcing policy

With the policy set configured, Terraform Enterprise has sourced our policy from version control and will enforce it on a run. Given the following simple Terraform configuration in a workspace:

```
resource "random_pet" "animal" {
  length = 3
}

output "random" {
  value = "${random_pet.animal.id}"
}
```

Our policy check passes:

Enforcing Policies

The policy ensures that the random_pet resources in this run are configured to yield at least three names, which are generated after the run is applied:

Enforcing Policies2

As commits are pushed and merged to our policy source, Terraform Enterprise will receive those changes automatically and use the latest version of our policy as it evolves in subsequent runs. If your VCS provider fails or you have a custom CI/CD pipeline, you can easily configure the policy set to manually create new versions via API and upload the policies and configuration file in a single tar file (tar.gz).
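
A rough sketch of that API-driven flow, with the policy set ID, API token, and upload URL as placeholders (see the policy sets API documentation for the exact request formats):

```
# Package the configuration file and policies into a single archive
$ tar -czf policies.tar.gz sentinel.hcl must-have-three-words.sentinel

# Create a new policy set version; the response contains an upload link
$ curl --header "Authorization: Bearer $TOKEN" \
    --header "Content-Type: application/vnd.api+json" \
    --request POST \
    https://app.terraform.io/api/v2/policy-sets/polset-XXXXXXXX/versions

# Upload the archive to the returned upload URL
$ curl --request PUT --data-binary @policies.tar.gz "$UPLOAD_URL"
```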

Summary

With VCS integration and direct API uploads, versioned policy sets provide a first-class policy as code experience and are now the recommended way to manage Sentinel policies in Terraform Enterprise.

Versioned policy sets are now available in Terraform Cloud and will be available in the upcoming release of Terraform Enterprise. Documentation and further examples can be found in the Terraform Cloud Sentinel documentation. For more information on Terraform Cloud and Terraform Enterprise or to get started with your free trial, visit the Terraform product page.

from Hashicorp Blog

Announcing ServiceNow Integration for Terraform Enterprise

Today at HashiConf EU we are announcing the HashiCorp Terraform Enterprise integration for ServiceNow Service Catalog. Terraform Enterprise offers organizations an infrastructure as code approach to multi-cloud provisioning, compliance, and management. Organizations who adopt Terraform Enterprise want to provide self-service infrastructure to end-users within their organization. The integration with ServiceNow extends this capability so that any end-user can request infrastructure from the ServiceNow Service Catalog and Terraform Enterprise can provide an automated way to service those requests.

This blog will discuss self-service infrastructure with ServiceNow & Terraform Enterprise and the workflow, including:
* Integration Setup
* Ordering Infrastructure
* Provisioning and Policy Enforcement
* Request Completion

Self-Service Infrastructure with ServiceNow & Terraform Enterprise

ServiceNow provides digital workflow management, helping teams work quickly and efficiently with one another by offering a straightforward workflow for their interactions. The ServiceNow Service Catalog offers a storefront of services that can be ordered by different people in the organization. One common request between teams is for Cloud resources: a developer needs a fleet of machines to test out a codebase or the IT team in finance has a request for infrastructure to run their new accounting software. For organizations who use the ServiceNow Service Catalog, the requests can be submitted through ServiceNow and routed to the right team for Cloud Infrastructure.

Terraform Enterprise provides provisioning automation through infrastructure as code and security, compliance, and cost-sensitive policy enforcement against all resources as they are provisioned. Our newest integration connects the human workflow power of ServiceNow with the infrastructure workflow capabilities of Terraform Enterprise.

Terraform Enterprise Workflow

The Workflow

The native integration provides a simple and streamlined setup process for Terraform Enterprise and the ServiceNow Service Catalog. Once set up, end-users can order services from Terraform Enterprise. Terraform Enterprise will execute provisioning and policy enforcement. Depending on the level of automation the IT operations team has set up, this can be fully automated or have built-in checkpoints to allow for oversight.

Integration Setup

To set up the integration, an administrator connects ServiceNow and Terraform Enterprise using our integration template. That integration connects the VCS repositories containing the template configurations for your infrastructure to both Terraform Enterprise and ServiceNow, allowing teams to order infrastructure provisioned by Terraform through ServiceNow.

Integration Setup

Ordering Infrastructure

Any user with access to the Terraform catalog can submit an order for infrastructure through the Service Catalog. Simply choose the Terraform catalog, pick the type of infrastructure, and click "Order".

Ordering Infrastructure

Ordering Infrastructure3

Ordering Infrastructure4

Ordering Infrastructure5

Provisioning & Policy Enforcement

When a user submits a ticket to order infrastructure, Terraform Enterprise uses the template configurations from the Setup step and creates a workspace, runs a plan, and then runs an apply if the plan passed all policy checks. You're able to see from the workspace name and description where it came from, and even follow a link out to the Service Catalog ticket.

Provisioning & Policy Enforcement

Provisioning & Policy Enforcement2

If a policy check had failed, the apply would not have run and the Terraform operator would need to take a look.

Complete Request

When Terraform Enterprise has finished the run successfully, the infrastructure information is sent directly to the Service Catalog ticket so the requester can begin using it.

Complete Request

The full workflow for ordering infrastructure from Terraform Enterprise can be seen in the following demo.

Getting Started

The ServiceNow Service Catalog integration is part of Terraform Enterprise. To learn more about self-service infrastructure visit https://www.hashicorp.com/products/terraform. To learn more about getting started with Terraform Enterprise visit https://www.hashicorp.com/go/terraform-enterprise.

from Hashicorp Blog

HashiCorp Consul 1.6: Dynamic Traffic Management and Gateways

We are excited to announce the beta release of HashiCorp Consul 1.6. This release supports a set of new features to enable Layer 7 routing and traffic management. It also delivers a new feature, mesh gateway, that transparently and securely routes service traffic across multiple regions, platforms, and clouds.

Download Now

A year ago, Consul 1.2 introduced our service mesh solution, Consul Connect. Our initial release focused on solving security challenges at Layer 4 and leveraging Consul’s service discovery feature to provide service-to-service identity and trust. Since that point, Connect has added support for additional proxies (Envoy), L7 observability, a simpler way to enable Consul ACLs and TLS, and platform integrations with Kubernetes.

We're proud to announce a major milestone in realizing our vision for service mesh. With the release of Consul 1.6 we are adding features for traffic management at Layer 7 and enabling transparent, cross-network connectivity with Mesh Gateways. Of course, these features work across platforms, with continued first-class support for Kubernetes and easy deployment across more traditional environments on any cloud or private network. This delivers on HashiCorp's goal for Consul to enable multi-cloud-service networking.

Additional Layer 7 Features

In Consul 1.5 we announced support for Layer 7 observability using Consul Connect and Envoy. This is made possible by configuring Envoy sidecars that proxy all traffic in and out of their associated services. The proxies form a data plane that transports requests, while Consul Connect acts as a control plane that configures all the proxies and responds to dynamic changes in your workloads and network.

Starting in that 1.5 release, Consul users could implement observability at Layer 7 (connections, bytes transferred, retries, timeouts, open circuits, request rates, and response codes) by writing configuration entries that configure the sidecar proxies to export metrics and tracing data.

Consul 1.6 introduces additional configuration entry types that enable advanced traffic management patterns for service-to-service requests. The additional configuration entry kinds, service-resolver, service-splitter, and service-router, enable increased reliability with advanced service failover, and deployment patterns such as HTTP path-based routing, and traffic shifting.

Users can create configuration entries in HashiCorp Configuration Language (HCL) or JSON, and interact with them via the Consul CLI or API. Below is an example of a service-splitter configuration that sends 10% of traffic to version two of a service, and 90% of traffic to version one. Notice the weight and service subset fields.


Kind = "service-splitter"
Name = "billing-api"
Splits = [
{
Weight = 10
ServiceSubset = "v2"
},
{
Weight = 90
ServiceSubset = "v1"
},
]

Operators can combine these configuration entries to apply advanced traffic management patterns to large infrastructures from a centralized location.

Consul client agents can automatically configure and reconfigure proxies without redeploying them. This centralized configuration frees operators from the burden of managing a mix of service configuration files, load balancer configs, and application-specific routing definitions.

Consul Connect now gives operators centralized control of the sidecar proxies in their service mesh, even in complex infrastructure configurations.

Learn more in our docs on the L7 Traffic Management page.

Mesh Gateway

As organizations distribute their workloads across multiple platforms, data centers, and clouds, the underlying network becomes increasingly fragmented and complex. Services in their respective environments run on independent networks, leading to multiple network silos. Managing connections between multiple network environments is challenging. It requires careful network planning to avoid overlapping IP addresses and typically relies on point-to-point VPN, networking peering, or private links. These approaches add operational overhead to manage and troubleshoot.

Mesh gateways are Envoy proxies at the edge of a network which enable services in separate networking environments to communicate with each other easily. They are configured by Consul using a mechanism similar to sidecar proxies. If a source service wants to connect to a destination service in a remote cluster, platform, or cloud, the traffic is proxied through mesh gateways, which route it to the destination service based on the Server Name Indication (SNI) sent as part of the TLS handshake. Because SNI is part of TLS, mesh gateways don't (and can't) decrypt the payload data and have no special access to it, which keeps data safe even if a mesh gateway is compromised. Mesh gateways are a routing tool for service-to-service connections and are not suitable for general-purpose ingress of non-mesh traffic.
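
For example, on a host at the edge of a datacenter's network, a gateway can be started with Consul's built-in Envoy integration. This is a sketch: the LAN and WAN addresses are placeholders, and the required ACL and TLS setup is covered in the Mesh Gateways documentation:

```
# Register and run a mesh gateway, advertising both a local and a WAN address
$ consul connect envoy -mesh-gateway -register \
    -address "10.0.0.10:8443" \
    -wan-address "203.0.113.10:8443"
```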

Consul’s Kubernetes integrations have been updated to enable Kubernetes users to easily deploy gateways. This enables services inside Kubernetes environments to communicate with services running on other platforms without complex configuration.

Learn more in our docs on the Mesh Gateways page.

Intentions and Certificate Replication

Today we are open sourcing the intention and certificate authority replication features that we originally shipped as part of Consul Enterprise in Consul 1.4. These features make connecting services across logical datacenters in Consul possible, and are vital building blocks for mesh gateways. We recognize that organizations of all sizes are increasingly challenged to manage a variety of infrastructure components across public clouds, private networks, and diverse runtime platforms such as Nomad or Kubernetes.

Conclusion

You can get started now by following our new guide on Connecting Services Across Networks with the new mesh gateway feature.

Review the v1.6.0-beta1 changelog for a detailed list of changes. Release binaries can be downloaded here. We also welcome your bug reports or feedback on GitHub, and any questions in the new Community Portal.

We encourage you to experiment with these new features, but recommend against using this build in a production environment. Depending on feedback and resolving outstanding issues, we may release further betas, release candidates, and will publish a final generally available 1.6 release when appropriate.

Thank you to our active community members who have been invaluable in adding new features, reporting bugs, and improving the documentation for Consul in this release!

from Hashicorp Blog

Announcing the Full Schedule for HashiConf in Seattle

This September we are excited to bring HashiConf, HashiCorp’s fifth annual and flagship community conference, to Seattle. With 40+ sessions, 12 in-depth trainings, product releases, and the chance to connect with HashiCorp engineers and product experts, we hope that you will find this year’s conference schedule as inspiring as the many members that make up the community itself.

Here are just some of the highlights you can enjoy this year at HashiConf.

Terraform

Vault

Consul

Nomad

In addition to the breakout sessions, HashiConf’s Training Day provides comprehensive, full-day training sessions on Vault, Consul, Nomad, Terraform, Vault Enterprise, and Terraform Enterprise taught by HashiCorp engineers and experts.

Register now for your ticket and connect with the HashiCorp community for three days of education, collaboration, and connection in Seattle this September.

We hope to see you there.

from Hashicorp Blog