Tag: Hashicorp Blog

Join HashiCorp OSS projects for Hacktoberfest 2019

Every October, open source projects participate in a global event called Hacktoberfest. The event promotes open source and helps beginners learn to contribute. This year, HashiCorp's Terraform AWS Provider and Nomad projects would like your help!

What’s Hacktoberfest?

The rules are simple: submit four valid pull requests to open source projects on GitHub, and your reward is a t-shirt! Last year, 46,088 people completed the challenge. The four pull requests can be spread across any open source projects. For more information, see the Hacktoberfest FAQ. The challenge ends at the end of October.

Hacktoberfest for HashiCorp

If you’re interested in helping with HashiCorp projects, check out the Terraform Provider for AWS and Nomad.

First, review the contribution guidelines for each project (see the Terraform AWS Provider Contributing guides), where you’ll find out what we look for in a pull request. Then, search the open issues for something that seems interesting to try. We encourage first-time contributors to find a small documentation fix or bug report to work on and submit a PR, as sketched below.
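
If you have never opened a pull request before, a typical first contribution might look like the following (the fork URL placeholder, branch name, and commit message are only illustrative):

```bash
# Fork hashicorp/terraform-provider-aws on GitHub first, then:
$ git clone https://github.com/<your-username>/terraform-provider-aws.git
$ cd terraform-provider-aws
$ git checkout -b docs/fix-typo          # keep the branch small and focused
# ...edit the documentation page, then commit...
$ git commit -am "docs: fix typo in resource documentation"
$ git push origin docs/fix-typo          # then open the pull request from your fork on GitHub
```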

Don’t be afraid to ask questions! If you need some help, drop a reply to the “Hacktoberfest 2019” topic in our community forum and one of the team can help. There are all sorts of new things to learn. We’ll review your pull request within a week, per the Hacktoberfest guidelines, and give you feedback!

With Hacktoberfest, we hope to spread the word about our OSS work and connect with practitioners who are interested in learning how to contribute.

from Hashicorp Blog

Demonstrating HashiCorp Tools with Dance Dance Automation

When we build demonstrations for conferences or events, we want to highlight unique use cases for HashiCorp’s open source tools. In this post, we outline how we built Dance Dance Automation to demonstrate the use of HashiCorp Nomad, Terraform, and Consul and document some of the challenges along the way.

We debuted the first iteration of Dance Dance Automation at HashiConf 2019 and will continue to build upon it further. The game consists of a game server hosted on a Nomad cluster, connected with Consul Service Mesh, and provisioned by Terraform.

Game Objectives

In Dance Dance Automation, blocks are laid out on screen to the beat of the music, and the player has to tap the corresponding pad as each block passes by. Each block corresponds to an allocation on HashiCorp Nomad. If the player successfully times the tap of the pad to the block, the game stops the corresponding allocation. If the player manages to stop allocations faster than Nomad can reschedule them, they earn bonus points.

A countdown for the game starting, a set of three lanes, and blocks that must match the keypad

The ID of the allocation flashes above each block, and when there is no allocation, the label becomes null. The player receives 10 points for stopping an allocation successfully and 50 points for outpacing the scheduler with a “null” allocation. The game includes a multiplayer mode for players to compete against each other.
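
Under the hood, stopping an allocation is an ordinary Nomad operation. A hedged sketch of what the game server triggers, expressed with the Nomad CLI (the job name and allocation ID below are made up):

```bash
# List allocations for the target job, then stop one of them.
# Nomad's scheduler notices the stopped allocation and reschedules it,
# which is the race the player is trying to win.
$ nomad job status countdash
$ nomad alloc stop 8a3b5d2f
```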

Next, we take a closer look at the architecture that runs Dance Dance Automation.

Architecture diagram with a database and Consul cluster on Google Cloud Platform

Game Server on Nomad

The server components of Dance Dance Automation run on a Nomad cluster residing in Google Cloud Platform. Invisible to the player, these processes track allocations, assign them to players, organize games, and manage scores.

All games, scores, and allocations are persisted in a PostgreSQL database. The game server and other applications in the Nomad cluster (such as Cloud Pong, which we feature in another post) can be stopped as part of the game.

Networking with Consul

Within the Nomad cluster, the game server leverages Consul 1.6 Mesh Gateway and Service Mesh features. We use Consul to connect the game server to the database and observe the flow of traffic from game clients to the server. When the game server connects to the database and other instances, Consul Service Mesh encrypts all of its requests with mTLS.
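
As a rough illustration of what that wiring looks like in a Nomad job (the service names and ports here are hypothetical, not the game's actual configuration), the game server's sidecar declares the database as an upstream:

```hcl
service {
  name = "game-server"
  port = "8080"

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "postgres"   # reached over localhost via the Envoy sidecar
          local_bind_port  = 5432
        }
      }
    }
  }
}
```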

When the game clients (what the player sees) connect to a game server, an API gateway, built by combining Envoy and Consul’s central configuration, proxies the request to the backend game server. As a future step, we intend to use Mesh Gateways to connect to game servers across multiple clusters in different clouds.
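
Consul 1.6’s centralized configuration entries are what make that kind of Envoy front door possible. As a sketch (again using a hypothetical service name, not the game's real configuration), a service-defaults entry sets the protocol so Envoy can route HTTP traffic:

```hcl
# game-server-defaults.hcl, applied with: consul config write game-server-defaults.hcl
Kind     = "service-defaults"
Name     = "game-server"
Protocol = "http"
```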

Deployment with Terraform

Using Terraform Cloud, we provision all of the game components, from the cluster to the game server. When we make any changes, we push to a repository containing the Terraform configuration; Terraform Cloud receives the webhook event from GitHub and starts a plan. We found this particularly useful when one of us changed the Nomad job configuration while another fixed the DNS subdomains. Not only did Terraform lock the state, it also enabled a review of the changes and affected dependencies.
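
Wiring a configuration to Terraform Cloud is mostly a matter of pointing the remote backend at a workspace. A minimal sketch, with placeholder organization and workspace names:

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"        # placeholder, not our real organization

    workspaces {
      name = "dance-dance-automation"   # placeholder workspace name
    }
  }
}
```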

Game Client

When players enter the game, they interface with the game client. We built the game client with Godot, an open source game engine. When the application begins, it makes a call to the game server on Nomad to register the player and enter the lobby during multiplayer mode.

Initial landing screen for game, where a user selects their initials and joins a lobby

Specific actions in the game trigger API calls to the game server. For example, game completion triggers an API call to leave the game and post the high score.

We produced both the graphics and the audio for the game. To choreograph the blocks that players must match, we used the MBoy Editor, an application created for a similar Godot game, to flag each note and place it on one of the three tracks.

Players can interact with the game using keyboard controls, but for an increased fun factor, we wanted to create a set of pads for players to “dance” on.

A player using the keyboard to play Dance Dance Automation

Hardware

Initially, we wanted to run the game on a Raspberry Pi and connect our fabricated dance pads to its GPIO pins. However, the game’s performance on the Raspberry Pi was not ideal, so we settled on an alternative solution.

Since the game was built with keyboard controls as a back-up mechanism, we mapped the signals from the dance pads to specific key presses. To run the game on more powerful hardware than the Raspberry Pi, we had to proxy those key presses from the Pi to a laptop. To accomplish this, we created a “button server” on the Raspberry Pi that translates GPIO inputs into HTTP requests and forwards them to a “key server” on the client machine, which maps each request to a key press. The Raspberry Pi also hosts an “LED server”, which accepts HTTP calls from the client and uses GPIO to light up the corresponding dance pad.
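
Conceptually, the moving parts are plain HTTP. The flow looks roughly like this (the hostnames, ports, endpoints, and payloads below are invented for illustration, not the actual servers' APIs):

```bash
# 1. The Pi's "button server" turns a GPIO press into a request to the laptop's
#    "key server", which injects the mapped key press into the game client:
$ curl -X POST http://laptop.local:8000/press -d '{"pad":"nomad"}'

# 2. The game client lights the matching pad by calling the Pi's "LED server",
#    which drives the LEDs over GPIO:
$ curl -X POST http://raspberrypi.local:8001/led -d '{"pad":"nomad","state":"on"}'
```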

A diagram of a floor tile being pushed and transmitting the signal to pins on a Raspberry Pi

The hardware presented some significant challenges, as electrical contacts and physical damage would affect the player’s experience. We spent quite some time soldering and repairing contacts, even taking a trip to a hardware store for additional wiring and supplies. Due to defects in some of the dance pads, we had to swap out our Packer pad for other working pads (which is why in photographs, you might see two Nomad or Vault pads!).

Two engineers soldering hexagonal dance tiles

Conclusion

When we set the dance pads on the floor of HashiConf 2019, we did get a few participants successfully playing on the pads. Some players even outpaced the Nomad scheduler with their skills for a few blocks!

We will continue to improve and extend Dance Dance Automation by fixing the hardware issues of our first iteration and adding more of the HashiCorp tools to the architecture. We hope to integrate Vault as a way of rotating our database credentials and use more of Consul’s service mesh to control traffic between game servers and clients. For practical use, the game will be optimized for the Raspberry Pi, to eliminate the need for additional computers. Once we improve the performance of the game, we might create an immutable Raspbian image with Packer so we can create multiple clients quickly.
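
For the Vault piece, the idea would be the standard database secrets engine flow, sketched below with placeholder names and an example-only password; this is not yet part of the game:

```bash
# Enable the database secrets engine and point it at the game's PostgreSQL instance.
$ vault secrets enable database
$ vault write database/config/games \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@db.example.internal:5432/games" \
    allowed_roles="game-server" \
    username="vault-admin" \
    password="example-only"

# Define a role that issues short-lived credentials the game server could consume.
$ vault write database/roles/game-server \
    db_name=games \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=1h max_ttl=24h

# Each read returns a fresh username/password pair with a limited TTL.
$ vault read database/creds/game-server
```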

A player on dance tiles stepping to the game

If you have any questions or are curious to learn more, feel free to reach out on the community forum, on the HashiCorp User Groups, Events, & Meetups topic. Stay tuned for Dance Dance Automation coming to a HashiCorp community event near you!

from Hashicorp Blog

Announcing the Terraform Plugin SDK

We’re pleased to announce the release of the Terraform Plugin SDK, a standalone Go module for developing Terraform providers.

Terraform is its Providers

Terraform providers are the crucial component that allow Terraform to represent almost any infrastructure type or service API as a resource in a simple, declarative configuration language. They are central to the day-to-day experience of using Terraform.

With the release of Terraform 0.10, Terraform providers were split from the Terraform Core codebase and have since been versioned separately. This change unlocked the potential for a thriving ecosystem of Terraform providers: the core providers maintained by HashiCorp, and a large number of high-quality third-party providers developed by our partners and the open source community.

We want to maintain the excellent practitioner experience that Terraform is known for, while making provider development easier and safer.

The Terraform Plugin SDK v1.0.0: Democratizing Terraform Provider Development

Until now, the Plugin SDK was part of the Terraform Core codebase. Developers writing Terraform providers needed to import Terraform Core (https://github.com/hashicorp/terraform) as a library, and make use of an implicit SDK living mainly in the helper/ directory. Splitting the Plugin SDK out of the Core codebase, like the providers in 0.10, allows us to give it more meaningful versions, and iterate more quickly on features and bug fixes.

While the critical types and interfaces for developing providers are well documented in the code, there is a high barrier to entry in determining which parts of Core pertain to the Terraform CLI, and which are available for developing providers. Additionally, importing Terraform Core pulls in a large number of Go dependencies that are unnecessary for developing Terraform providers.

The Terraform Plugin SDK extracts this implicit SDK from Terraform Core into a standalone Go module. Terraform providers no longer need to import Terraform Core and should instead import the Terraform Plugin SDK, whose API surface is explicitly intended for Terraform provider functionality. We hope this change lowers the barrier to entry for creating Terraform providers.

Terraform Plugin SDK v1.0.0 is designed for maximum compatibility with existing providers importing Terraform Core v0.12. The SDK is versioned separately from Core. Improvements to the SDK will start from this baseline, following a Semantic Versioning scheme compatible with Go modules. The informal SDK within the Core repository is now deprecated, and will be removed in a future version.

The interface between the Terraform Plugin SDK and Terraform Core is a gRPC wire protocol described in a single protobuf file. For more technical details, please see the plugin-protocol documentation. Currently, providers are the only type of plugin supported by the Plugin SDK.

Getting Started with the Terraform Plugin SDK

If you are a Terraform provider developer, we recommend that you switch from Terraform Core to the Terraform Plugin SDK. The Terraform team has created a migrator CLI tool to make this a simple process.
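
From memory, the migration tool works roughly as shown below; treat the module path and subcommand names as assumptions and defer to the linked guide for the authoritative steps:

```bash
# Install the migrator (module path as we recall it) and run it from the provider's root.
$ go get github.com/hashicorp/tf-sdk-migrator
$ tf-sdk-migrator check     # reports whether the provider is eligible to migrate
$ tf-sdk-migrator migrate   # rewrites imports from hashicorp/terraform to hashicorp/terraform-plugin-sdk
```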

For a step-by-step guide, please see our documentation:
https://www.terraform.io/docs/extend/plugin-sdk.html

from Hashicorp Blog

Videos from HashiConf 2019: Keynotes and Breakout Sessions

This year’s HashiConf was a return to HashiCorp’s spiritual birthplace: Seattle, Washington. HashiCorp co-founders Armon Dadgar and Mitchell Hashimoto first met as students at the nearby University of Washington and began working with early versions of public cloud infrastructure. What they observed were many of the same core problems cloud users face today:

  • Difficulty standardizing and codifying infrastructure deployments
  • The absence of a strong security perimeter in the cloud
  • Increased networking complexity as exponentially more application services appear

These core challenges became the focus for HashiCorp’s products. These products—and the way they are being used to enable multi-cloud infrastructures—were the focus of this year’s HashiConf presentations. Organizations such as Google, Microsoft, Starbucks, Xfinity Mobile, Datadog, Kong Cloud, State Farm, and Petco spoke about how they use HashiCorp Terraform, Vault, Consul, Nomad, Packer, and Vagrant.

A sold-out crowd of over 1,600 attendees this year—400 more than joined last year’s conference—saw case studies, demos, and keynote sessions introducing Terraform Cloud’s GA feature set and HashiCorp Consul Service on Azure: the first fully managed service mesh. Today, we have all 40+ breakout sessions and keynotes available to watch. They are listed below in the order the sessions were presented.

A number of the videos listed below include slides and transcripts, and more are being added soon. Several breakout session and hallway track speakers have shared their HashiConf slides in this community forum thread. You can also visit this link to filter talks by product, case studies, demos, and more.

Day One

Day Two

See you again in 2020

HashiConf US 2020, our flagship community conference in North America, will take place in San Diego, CA, October 13-15, 2020. Early Bird tickets are already available.

HashiConf EU 2020, for our European community, will once again take place in Amsterdam, NL June 8-10, 2020. Book your Early Bird ticket here.

Interested in speaking at a HashiConf next year? The CFP for both conferences will open on January 9, 2020, so please check the above sites then.

from Hashicorp Blog

Configuring Third-Party Load Balancers with Consul: NGINX, HAProxy, F5

In a monolithic environment, manually configuring load balancers is fairly straightforward, but as you transition to dynamic infrastructure or microservices, it becomes more painful. Suddenly, instead of having as many static IP addresses as you have application instances, you are faced with tens, hundreds, or thousands of IP addresses per service, which change on the order of days or even hours. This situation is impossible to maintain manually, but Consul can automate it.

Consul can dynamically configure third-party load balancers with a list of IPs for each service and update that list automatically. We currently have documented integration examples for NGINX, HAProxy, and F5, each of which demonstrates a different method of integration.

Consul’s flexibility makes it possible to configure most applications, including load balancers, with service discovery data as well as key-value data. Let’s look at some examples of our load balancer integrations.

NGINX integration using Consul Template

Consul Template is a tool based on Go templates that takes an input file, populates it with data from Consul, outputs an application configuration file when that data changes, and can execute an arbitrary command to trigger its target service to reload the configuration.

Consul Template can update NGINX configuration with the list of IP addresses for a given service. You can also use it to update other configuration stored in Consul as key-value pairs. Try out our NGINX integration by following the Learn guide.
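
A minimal sketch of that pattern, using a hypothetical "web" service: a Consul Template input renders an NGINX upstream block from the service's instances, and NGINX is reloaded whenever Consul's view of the service changes.

```bash
# Write a Consul Template input (Go template syntax) for the upstream block.
$ cat > nginx-upstream.ctmpl <<'EOF'
upstream web_backend {
{{ range service "web" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}
EOF

# Render the template to the NGINX config directory and reload NGINX on every change.
$ consul-template \
    -template "nginx-upstream.ctmpl:/etc/nginx/conf.d/web.conf:nginx -s reload"
```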

Load Balancing with NGINX and Consul Template

NGINX Plus has direct integration with Consul DNS and doesn’t require running Consul Template. Keep an eye out for the guide, coming soon.

HAProxy Native Integration with Consul DNS

HAProxy version 1.8+ (LTS) includes server-template, which lets users specify placeholder backend servers to populate HAProxy’s load balancing pools. server-template can fill those placeholders from Consul by requesting SRV records from Consul DNS. Follow the guide below to try out the HAProxy integration.
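
A hedged sketch of the shape of that configuration, with an illustrative service name and resolver; see the guide below for a tested version:

```bash
# Append a Consul resolver and a server-template backend to haproxy.cfg.
$ cat >> /etc/haproxy/haproxy.cfg <<'EOF'
resolvers consul
  nameserver consul 127.0.0.1:8600
  accepted_payload_size 8192

backend web_backend
  balance roundrobin
  server-template web 10 _web._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check
EOF
```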

Load Balancing with HAProxy Service Discovery Integration

F5 BIG-IP Integration with Consul’s HTTP API via AS3

F5 BIG-IP is an application service platform that offers sophisticated load balancing and traffic visibility, among other features. Application Services 3 (AS3) is an extension to BIG-IP that makes programmatic API calls to other services. In this case, AS3 queries the Consul HTTP API for service instances on a set interval and uses the returned IP addresses to update BIG-IP’s instance pool. Consul is designed for programmatic interaction and you access almost all of its functionality via the HTTP API.
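
The Consul side of that integration is simply the catalog API. The endpoint AS3 polls looks like the following (the service name is illustrative):

```bash
# Returns the node address and service port for every instance of "web",
# which AS3 uses to refresh the BIG-IP pool members.
$ curl http://127.0.0.1:8500/v1/catalog/service/web
```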

Explore the integration by following our step-by-step command line guide, or if you aren’t ready to try it out yourself, watch the included webinar recording.

Load Balancing with F5 and Consul

Learn More

As you can see, Consul integrates with a wide range of load balancers and offers a number of interfaces for application integration. If you’d like to configure a service using data from Consul, the easiest place to start is with Consul Template. For the richest experience, the HTTP API is documented here.

from Hashicorp Blog

Quantum Security and Cryptography in HashiCorp Vault

As quantum computers grow in power and reliability, we at HashiCorp have been asked a number of questions about how we plan on protecting Vault against quantum threats.

Quantum computing has the potential to seriously change how we think about cryptographic security. By exploiting quantum mechanics such as entanglement and quantum parallelism, quantum computers can run cryptanalysis algorithms capable of simplifying the math protecting many popular forms of encryption.

With modern computing power, attacking RSA 2048 using a number sieve should take a few orders of magnitude longer than the expected lifetime of our galaxy. With a sufficiently powerful quantum computer, we can expect to break the same encryption in roughly thirty minutes.

Publicly known quantum computers at the time of this writing are incapable of attacking modern cryptography with quantum cryptanalysis; today’s publicized quantum computers are about two orders of magnitude less powerful than such a machine. But rapid advances in photonic physics and error correction are increasing the power of quantum computers at a doubly exponential rate – far outpacing the way digital computers have evolved under Moore’s Law.

This rapid acceleration of quantum computing technology means that anyone who intends to protect secrets for the future must begin to think about quantum threats today. And for the last few years, the Vault team has quietly been doing just that.

Community Efforts Around Quantum Security

Since 2016, the National Institute of Standards and Technology (NIST) has led the Post Quantum Cryptographic Standardization Process (PQCSP). PQCSP’s goal is to review and endorse new algorithms resilient against known quantum cryptanalysis, and to stop endorsing algorithms known to be vulnerable to quantum threats. The cryptographic community has rallied around these efforts, and many security vendors are preparing to implement the output of PQCSP to protect their products against advanced adversaries in the future.

We on the Vault team have been closely following NIST’s efforts on post-quantum security since the process’ inception and plan on implementing guidance from NIST before draft changes to NIST SP800 documents (and correspondingly regulations such as FIPS 140-2) are introduced in 2022-2024.

But not all Vault users follow NIST’s guidelines or FIPS 140-2. Vault is a global project, and both Vault and Vault Enterprise are currently used to protect secrets across the world – even in low earth orbit! As such, we deal with quantum security holistically in Vault during the design and development of each release.

Quantum Security in Vault

When we introduce features in Vault that have implications on Vault’s security or involve cryptography, we also review whether this change exposes Vault to known quantum cryptanalysis. In particular we consider two quantum algorithms: Shor’s Algorithm and Grover’s Algorithm.

Shor’s Algorithm (or simply Shor’s) exploits quantum mechanics to dramatically reduce the difficulty of factoring large numbers into their prime factors via the use of a Quantum Fourier Transform. Shor’s reduces the computational difficulty of this task (and of the closely related discrete logarithm problem) such that algorithms that rely on the hardness of these problems – for example RSA and Diffie-Hellman – are vulnerable to attack by sufficiently powerful quantum computers.

Grover’s Algorithm (or simply Grover’s) exploits quantum parallelism to quickly search for the statistically-probable input value of a black-boxed operation. Grover’s does not yield attacks that invalidate whole fields of encryption like Shor’s. But it does reduce the difficulty of intelligently searching for the keys of symmetric key encryption via brute force search.
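
The practical impact is usually summarized as a square-root speedup (a standard result, stated here for context): for an n-bit symmetric key, the brute-force work drops roughly as follows.

```latex
% Classical vs. Grover-assisted exhaustive key search over an n-bit keyspace:
\[
  T_{\text{classical}} = O\!\left(2^{n}\right), \qquad
  T_{\text{Grover}} = O\!\left(2^{\,n/2}\right)
\]
% e.g. a 256-bit key retains roughly 128 bits of effective strength against Grover's.
```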

When implementing new cryptography in Vault, and reviewing the cryptography protecting existing critical security parameters in Vault, we review whether Shor’s and Grover’s algorithms have implications on that cryptography’s security against adversaries armed with sufficiently-powerful quantum computers.

Post-Quantum Cryptography in Vault

Quantum computing is not always destructive to security. There are a number of new (and in some cases renovated) ciphers and cryptographic techniques being introduced to deal with threats powered by quantum computers. When peer reviewed implementations of this cryptography are available, we look to support them in Vault.

An example of this is the chacha20-poly1305 cipher. Chacha20-poly1305 is a stream cipher that was introduced as an alternative to AES to deal with future cryptanalytical attacks.

Grover’s presents the possibility that future quantum computers may be able to attack symmetric key cryptography, reducing the effective key strength of some modes of AES roughly by half. In response, chacha20-poly1305 has begun to be endorsed as an alternative in the face of future quantum cryptanalysis.

We have been tracking this conversation about AES and chacha20-poly1305 for several years. Since Vault 0.9.4 (2018), we have supported chacha20-poly1305 within the Transit secret engine. This allows Vault users to use chacha20-poly1305 for all transit encrypt/decrypt operations, including with convergent encryption and with key derivation.
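
In practice, that looks like the following Transit calls (the key name and plaintext are arbitrary):

```bash
$ vault secrets enable transit
$ vault write -f transit/keys/app-key type=chacha20-poly1305
$ vault write transit/encrypt/app-key plaintext=$(base64 <<< "my secret data")
```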

We have also implemented mechanisms in Vault to handle seal migration. While ostensibly this is to support the migration from Shamir’s key shares to different forms of auto-unseal, we recognize that it may also be used in the future to allow Vault to construct its cryptographic barrier keys with post-quantum cryptography.

Our implementation of AES 256-GCM, which we use to construct the cryptographic barrier for Vault’s data at rest, is resistant against most known quantum attacks. But we recognize that this may change in the decades to come as quantum computing enters the mainstream.

Post-Quantum Key Distribution and Entropy Protection

Quantum computing also presents a number of interesting new technologies for augmenting system security. Techniques such as Quantum Key Distribution (QKD) use quantum entanglement and superposition to securely distribute cryptographic keys over large distances in a way that reveals any attempt to tamper with or eavesdrop on that communication.

While implementations of QKD exist – most notably the Chinese "Micius" QKD satellite – implementing QKD for protecting commercial data is currently untenable. Still, we have the capabilities today in Vault to adopt implementations of QKD for auto-unseal and transit when they become available in the future.

What modern quantum computers do offer is an abundance of random data, or entropy. Even today’s limited-qubit quantum computers generate high-quality entropy in normal operation. This entropy is extremely valuable in cryptography, as operations such as generating symmetric keys for ciphers like AES or ephemeral session keys for SSH/TLS require robust entropy sources for random number generation.

In upcoming versions of Vault Enterprise, we will release features that allow Vault to sample entropy from external sources. Some of these external sources will include quantum computing sources, including hardware security modules that employ quantum random number generators.

The Road Ahead

Vault’s mission is to secure any kind of information for any kind of infrastructure. As quantum computing becomes part of the infrastructure stack, and quantum threats become part of one’s threat model, we stand ready to adopt new technology in support of our ongoing mission.

from Hashicorp Blog

Building Richer Interactions in the HashiCorp Community

Our HashiCorp User Group program is continuing to evolve as the community steadily grows; new chapters are spinning up weekly around the globe. Over 170 organizers continue to engage their local chapters by creating environments for learning, sharing, and discussing HashiCorp tooling.

Recently, organizers have asked for input and best practices on alternative HUG programming. Several have shared excellent ideas and we want to share them with the rest of the community. A number of organizers are steering away from the traditional lecture style format and increasing engagement with more interactive agendas.

We are pleased to support our organizers in brainstorming and implementing these ideas.

Lightning Talks

Lightning talks often consist of 5 or 6 speakers giving 10-minute talks or demos with a 5-minute buffer for questions and answers. The Oslo, Philadelphia, Seattle, and New York City HUGs are hosting lightning talks.

The lightning talk format lowers the barrier to entry and gives more practitioners the opportunity to speak at Meetups. Additionally, organizers find that attendees are more engaged and learn about a broader range of topics compared to a single lecture-style talk that may be too advanced for beginners.

To source speakers, organizers pass along the opportunity to the community through a simple Meetup message and cross post on other local user groups. They are able to build their agenda based on the responses they receive.

Hands-on Labs

The Hamburg and Oslo HUG organizers have hosted two interactive Meetups so far.

The first was a hands-on lab that covered the HashiCorp product suite. We sent HashiCorp Developer Advocate Erik Veld to be the floating expert on site. The organizers split the Meetup attendees into four rooms, gave each room a specific product to workshop, and encouraged people to move among the rooms. The second was an interactive session on Terraform, where the speakers encouraged people to bring their laptops to share code and tackle problems and questions together.

People have different learning styles: some learn best by reading text, others by watching videos or by doing. Interactive Meetups give practitioners hands-on experience and enable a more active type of learning. Organizers find that attendees are excited to have accomplished something at the Meetup.

What’s next?

Out-of-the-box Meetups are well received, and we love seeing the creative approaches organizers are taking to keep their communities engaged and informed, such as livestreams, game days, master classes, open mic nights, and more.

In the near future, we are going to offer certification training and exams through the HUG program. Keep a close eye on the schedule here to sign up for a certification Meetup with your local HUG chapter.

It is exciting to see organizers embracing their local communities and communicating an open call for involvement through ideas, speakers, venues, and partners.

Wondering how to get involved?

Find your local chapter and get in touch with the organizers. Attend and participate in a Meetup. If a chapter does not exist in your city, reach out to us. We are always on the lookout for strong community leads!

Are you engaged in any conversations happening in our Community Forum? We have a category especially for HUGs and other community events.

from Hashicorp Blog

HashiCorp Consul at Cloud Field Day

We are excited to announce that we will be participating in Cloud Field Day 6 (#CFD6) on Sept. 25, starting at 11:30 a.m. PDT, with an hour-long session focused on HashiCorp Consul.

Cloud Field Day is an independent IT influencer event where industry professionals participate in a delegate panel and use their expertise to ask questions and provide feedback about the tools being presented in order to learn and share more about the technology, applicable use cases, and the broader industry. We chose this event because of its technical in-depth focus and honest, open communication style.

During the HashiCorp portion of the event, HashiCorp co-founder and CTO Mitchell Hashimoto and Technical Advisor to the CTO Anubhav Mishra will give an overview of Consul. As HashiCorp’s cloud networking automation product, Consul provides service discovery and service mesh capabilities to help customers build fast, reliable, and scalable networks to support cloud-based applications. Mitchell and Mishra will provide a technical overview of the product, highlighting the newest capabilities for the broader technical community. The speakers and additional members of the HashiCorp team will participate in the event to hear comments and feedback and to answer questions from both the in-person delegates and the live-stream audience.

Watch the live-stream of our portion of Cloud Field Day to hear details about HashiCorp and how Consul helps simplify networking challenges in and across clouds. You can also follow along with the discussion on Twitter using the #CFD6 hashtag.

If you would like to learn more about Consul, read the latest blog, check out our Consul Learn tracks and our Community Forum.

from Hashicorp Blog

Consul Connect Integration in HashiCorp Nomad

At HashiConf EU 2019, we announced native Consul Connect integration in Nomad, available in a technology preview release. A beta release candidate for Nomad 0.10 that includes Consul Connect integration is now available. This blog post presents an overview of service segmentation and how to use features in Nomad to enable end-to-end mTLS between services through Consul Connect.

Background

The transition to cloud environments and a microservices architecture represents a generational challenge for IT. This transition means shifting from largely dedicated servers in a private datacenter to a pool of compute capacity available on demand. The networking layer transitions from being heavily dependent on the physical location and IP address of services and applications to using a dynamic registry of services for discovery, segmentation, and composition. An enterprise IT team does not have the same control over the network or the physical locations of compute resources and must think about service-based connectivity. The runtime layer shifts from deploying artifacts to a static application server to deploying applications to a cluster of resources that are provisioned on-demand.

HashiCorp Nomad’s focus on ease of use, flexibility, and performance, enables operators to deploy a mix of microservice, batch, containerized, and non-containerized applications in a cloud-native environment. Nomad already integrates with HashiCorp Consul to provide dynamic service registration and service configuration capabilities.

Another core challenge is service segmentation. East-West firewalls use IP-based rules to secure ingress and egress traffic. But in a dynamic world where services move across machines and machines are frequently created and destroyed, this perimeter-based approach is difficult to scale as it results in complex network topologies and a sprawl of short-lived firewall rules and proxy configurations.

Consul Connect provides service-to-service connection authorization and encryption using mutual Transport Layer Security (mTLS). Applications can use sidecar proxies in a service mesh configuration to automatically establish TLS connections for inbound and outbound connections without being aware of Connect at all. From the application’s point of view, it uses a localhost connection to send outbound traffic, and the details of TLS termination and forwarding to the right destination service are handled by Connect.

Nomad 0.10 will extend Nomad’s Consul integration capabilities to include native Connect integration. This enables services being managed by Nomad to easily opt into mTLS between services, without having to make additional code changes to their application. Developers of microservices can continue to focus on their core business logic while operating in a cloud native environment and realizing the security benefits of service segmentation. Prior to Nomad 0.10, job specification authors would have to directly run and manage Connect proxies and did not get network level isolation between tasks.

Nomad 0.10 introduces two new stanzas to Nomad’s job specification—connect and sidecar_service. The rest of this blog post shows how to leverage Consul Connect with an example dashboard application that communicates with an API service.

Prerequisites

Consul

Connect integration with Nomad requires Consul 1.6 or later. The Consul agent can be run in dev mode with the following command:

```bash
$ consul agent -dev
```

Nomad

Nomad must schedule onto a routable interface in order for the proxies to connect to each other. The following steps show how to start a Nomad dev agent configured for Connect:

```bash
$ sudo nomad agent -dev-connect
```

CNI Plugins

Nomad uses CNI plugins to configure task group networks; these plugins need to be downloaded to /opt/cni/bin on the Nomad client nodes.
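
For example, on a Linux client node the reference CNI plugins can be fetched roughly like this (the version pinned below is illustrative; check the CNI plugins releases page for the current one):

```bash
$ curl -L -o cni-plugins.tgz \
    https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
$ sudo mkdir -p /opt/cni/bin
$ sudo tar -C /opt/cni/bin -xzf cni-plugins.tgz
```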

Envoy

Nomad launches and manages Envoy, which runs alongside applications that opt into Connect integration. Envoy acts as a proxy to provide secure communication with other applications in the cluster. Nomad launches Envoy using its official Docker container.

Also, note that the Connect integration in 0.10 works only in Linux environments.

Example Overview

The example in this blog post enables secure communication between a web application and an API service. The web application and the API service are run and managed by Nomad. Nomad additionally configures Envoy proxies to run alongside these applications. The API service is a simple microservice that increments a count every time it is invoked. It then returns the current count as JSON. The web application is a dashboard that displays the value of the count.

Architecture Diagram

The following Nomad architecture diagram illustrates the flow of network traffic between the dashboard web application and the API microservice. As shown below, traffic originating from the dashboard to the API is proxied through Envoy and secured via mTLS.

Networking Model

Prior to Nomad 0.10, Nomad’s networking model optimized for simplicity by running all applications in host networking mode. This means that applications running on the same host could see each other and communicate with each other over localhost.

In order to support security features in Consul Connect, Nomad 0.10 introduces network namespace support. This is a new network model within Nomad where task groups are a single network endpoint and share a network namespace. This is analogous to a Kubernetes Pod. In this model, tasks launched in the same task group share a network stack that is isolated from the host where possible. This means the local IP of the task will be different than the IP of the client node. Users can also configure a port map to expose ports through the host if they wish.

Configuring Network Stanza

Nomad’s network stanza will become valid at the task group level in addition to the resources stanza of a task. The network stanza will get an additional ‘mode’ option which tells the client what network mode to run in. The following network modes are available:

  • “none” – The task group will have an isolated network without any network interfaces.
  • “bridge” – The task group will have an isolated network namespace with an interface that is bridged with the host.
  • “host” – Each task will join the host network namespace and a shared network namespace is not created. This matches the current behavior in Nomad 0.9.

Additionally, Nomad’s port stanza now includes a new “to” field. This field allows for configuration of the port to map to inside of the allocation or task. With bridge networking mode, and the network stanza at the task group level, all tasks in the same task group share the network stack including interfaces, routes, and firewall rules. This allows Connect enabled applications to bind only to localhost within the shared network stack, and use the proxy for ingress and egress traffic.

The following is a minimal network stanza for the API service in order to opt into Connect.

```hcl
network {
  mode = "bridge"
}
```

The following is the network stanza for the web dashboard application, illustrating the use of port mapping.

```hcl
network {
  mode = "bridge"

  port "http" {
    static = 9002
    to     = 9002
  }
}
```

Configuring Connect in the API service

In order to enable Connect in the API service, we need to specify a network stanza at the group level and use the connect stanza inside the service definition. The following snippet illustrates this:

```hcl
group "api" {
  network {
    mode = "bridge"
  }

  service {
    name = "count-api"
    port = "9001"

    connect {
      sidecar_service {}
    }
  }

  task "web" {
    driver = "docker"

    config {
      image = "hashicorpnomad/counter-api:v1"
    }
  }
}
```

Nomad will run Envoy in the same network namespace as the API service, and register it as a proxy with Consul Connect.
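
Once the job is running, that registration can be observed directly in Consul. The output below is what we would expect to see, abbreviated and illustrative:

```bash
$ consul catalog services
consul
count-api
count-api-sidecar-proxy
```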

Configuring Upstreams

In order to enable Connect in the web application, we also need to configure the network stanza at the task group level, and we need to provide details about the upstream services it communicates with, which in this case is the API service. More generally, upstreams should be configured for any other service that this application depends on.

The following snippet illustrates this.

```hcl
group "dashboard" {
  network {
    mode = "bridge"

    port "http" {
      static = 9002
      to     = 9002
    }
  }

  service {
    name = "count-dashboard"
    port = "9002"

    connect {
      sidecar_service {
        proxy {
          upstreams {
            destination_name = "count-api"
            local_bind_port  = 8080
          }
        }
      }
    }
  }

  task "dashboard" {
    driver = "docker"

    env {
      COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
    }

    config {
      image = "hashicorpnomad/counter-dashboard:v1"
    }
  }
}
```

In the above example, the static = 9002 parameter requests that the Nomad scheduler reserve port 9002 on a host network interface. The to = 9002 parameter forwards that host port to port 9002 inside the network namespace. This allows you to connect to the web frontend in a browser by visiting http://<host_ip>:9002.

The web frontend connects to the API service via Consul Connect. The upstreams stanza defines the remote service to access (count-api) and what port to expose that service on inside the network namespace (8080). The web frontend is configured to communicate with the API service with an environment variable, $COUNTING_SERVICE_URL. The upstream's address is interpolated into that environment variable. In this example, $COUNTING_SERVICE_URL will be set to “localhost:8080”.

With this set up, the dashboard application communicates over localhost to the proxy’s upstream local bind port in order to communicate with the API service. The proxy handles mTLS communication using Consul to route traffic to the correct destination IP where the API service runs. The Envoy proxy on the other end terminates TLS and forwards traffic to the API service listening on localhost.

Job Specification

The following job specification contains both the API service and the web dashboard. You can run this using nomad run connect.nomad after saving the contents to a file named connect.nomad.

```hcl
job "countdash" {
  datacenters = ["dc1"]

  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "count-api"
      port = "9001"

      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "docker"

      config {
        image = "hashicorpnomad/counter-api:v1"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to     = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "9002"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port  = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }

      config {
        image = "hashicorpnomad/counter-dashboard:v1"
      }
    }
  }
}
```

UI

The web UI in Nomad 0.10 shows details relevant to Connect integration whenever applicable. The allocation details page now shows information about each service that is proxied through Connect.

In the above screenshot from the allocation details page for the dashboard application, the UI shows the Envoy proxy task. It also shows the service (count-dashboard) as well as the name of the upstream (count-api).

Limitations

  • The Consul binary must be present in Nomad's $PATH to run the Envoy proxy sidecar on client nodes.
  • Consul Connect Native is not yet supported.
  • Consul Connect HTTP and gRPC checks are not yet supported.
  • Consul ACLs are not yet supported.
  • Only the Docker, exec, and raw exec drivers support network namespaces and Connect.
  • Variable interpolation for group services and checks is not yet supported.

Conclusion

In this blog post, we shared an overview of native Consul Connect integration in Nomad. This enables job specification authors to easily opt in to mTLS across services. For more information, see the Consul Connect guide.

from Hashicorp Blog

HashiCorp Consul Enterprise Supports VMware NSX Service Mesh Federation

Recently at VMworld 2019 in San Francisco, VMware announced a new open specification for Service Mesh Federation. This specification defines a common standard to facilitate secure communication between different service mesh solutions.

Service mesh is quickly becoming a necessity for organizations embarking upon application modernization and transitioning to microservice architectures. Consul service mesh provides unified support across a heterogeneous environment: bare metal, virtual machines, Kubernetes, and other workloads. However, some organizations may choose to run different mesh technologies on different platforms. For these customers, federation becomes critical to enable secure connectivity across the boundaries of different mesh deployments.

We have partnered with VMware to support the Service Mesh Federation Specification. This blog will explain how services running in HashiCorp Consul service mesh can discover and connect with services in VMware NSX Service Mesh (NSX-SM).

What is Service Mesh Federation

consul service mesh federation

Service Mesh Federation is the ability for services running in separate meshes to communicate as if they were running in the same mesh. For example, a Consul service can communicate with an NSX-SM service running in a remote cluster in the same way it would communicate with another Consul service running in the same cluster.

How Does Consul Enterprise Support Service Mesh Federation

Service Sync

The first step towards supporting federation is Service Sync: sharing which services are running on each mesh. To accomplish this, Consul Enterprise implements the Service Mesh Federation Spec via the new Consul federation service. The Consul federation service communicates with NSX-SM’s federation service to keep the service lists in sync so that each mesh is aware of each other’s services.

consul service mesh federation service

First, Consul sends the foo service to the remote federation service and receives the bar service.

consul service sync

Next, Consul creates a Consul bar service to represent the remote bar service.

Inter-Mesh Communication: Consul to NSX-SM

With services synced, Consul services can now talk to remote services as if they were running in the same cluster. To do this, they configure their upstreams to route to the remote service’s name.

In this example, the Consul foo service wants to call the NSX-SM bar service. We configure an upstream so that port 8080 routes to bar:

```hcl
service {
  name = "foo"

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "bar"
            local_bind_port  = 8080
          }
        ]
      }
    }
  }
}
```

Then from the foo service, we simply need to talk to http://localhost:8080:

```bash
$ curl http://localhost:8080
<response from bar service>
```

Under the hood, we’re using the Consul service mesh sidecar proxies to encrypt all the traffic using TLS.

Consul connect to nsx service mesh

Inter-Mesh Communication: NSX-SM to Consul

From the bar service running in NSX-SM, we can use KubeDNS to talk to the foo service in Consul:

```bash
$ curl foo.default.svc.cluster.local
<response from foo service>
```

This request will route to the Consul Mesh Gateway and then to foo’s sidecar proxy. The sidecar proxy decrypts the traffic and then routes it to the foo service.

Conclusion

Service mesh federation between Consul Enterprise and NSX-SM allows traffic to flow securely beyond the boundary of each individual mesh, enabling flexibility and interoperability. If you would like to learn more about Consul Enterprise’s integration with NSX-SM, please reach out to our sales representatives to schedule a demo.

For more information about this and other features of HashiCorp Consul, please visit: https://www.hashicorp.com/products/consul.

from Hashicorp Blog