Consul Connect Integration in HashiCorp Nomad

At HashiConf EU 2019, we announced native Consul Connect integration in Nomad, available in a technology preview release. A beta release candidate for Nomad 0.10 that includes Consul Connect integration is now available. This blog post presents an overview of service segmentation and shows how to use new features in Nomad to enable end-to-end mTLS between services through Consul Connect.

Background

The transition to cloud environments and a microservices architecture represents a generational challenge for IT. This transition means shifting from largely dedicated servers in a private datacenter to a pool of compute capacity available on demand. The networking layer transitions from being heavily dependent on the physical location and IP address of services and applications to using a dynamic registry of services for discovery, segmentation, and composition. An enterprise IT team does not have the same control over the network or the physical locations of compute resources and must think about service-based connectivity. The runtime layer shifts from deploying artifacts to a static application server to deploying applications to a cluster of resources that are provisioned on-demand.

HashiCorp Nomad’s focus on ease of use, flexibility, and performance, enables operators to deploy a mix of microservice, batch, containerized, and non-containerized applications in a cloud-native environment. Nomad already integrates with HashiCorp Consul to provide dynamic service registration and service configuration capabilities.

Another core challenge is service segmentation. East-West firewalls use IP-based rules to secure ingress and egress traffic. But in a dynamic world where services move across machines and machines are frequently created and destroyed, this perimeter-based approach is difficult to scale as it results in complex network topologies and a sprawl of short-lived firewall rules and proxy configurations.

Consul Connect provides service-to-service connection authorization and encryption using mutual Transport Layer Security (mTLS). Applications can use sidecar proxies in a service mesh configuration to automatically establish TLS connections for inbound and outbound connections without being aware of Connect at all. From the application’s point of view, it uses a localhost connection to send outbound traffic, and the details of TLS termination and forwarding to the right destination service are handled by Connect.

Nomad 0.10 will extend Nomad’s Consul integration capabilities to include native Connect integration. This enables services being managed by Nomad to easily opt into mTLS between services, without having to make additional code changes to their application. Developers of microservices can continue to focus on their core business logic while operating in a cloud native environment and realizing the security benefits of service segmentation. Prior to Nomad 0.10, job specification authors would have to directly run and manage Connect proxies and did not get network level isolation between tasks.

Nomad 0.10 introduces two new stanzas to Nomad’s job specification—connect and sidecar_service. The rest of this blog post shows how to leverage Consul Connect with an example dashboard application that communicates with an API service.

Prerequisites

Consul

Connect integration with Nomad requires Consul 1.6 or later. The Consul agent can be run in dev mode with the following command:

```bash
$ consul agent -dev
```

Nomad

Nomad must schedule onto a routable interface in order for the proxies to connect to each other. The following steps show how to start a Nomad dev agent configured for Connect:
```bash
$ sudo nomad agent -dev-connect
```

CNI Plugins

Nomad uses CNI plugins to configure the task group networks; these plugins need to be downloaded to /opt/cni/bin on the Nomad client nodes.
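
As a rough sketch (the version and filename below are illustrative; check the containernetworking/plugins releases page for a current Linux build), the plugins can be installed like this:

```bash
# Download the reference CNI plugins and extract them to /opt/cni/bin
# on each Nomad client node. The version shown is an example only.
curl -L -o cni-plugins.tgz \
  https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -C /opt/cni/bin -xzf cni-plugins.tgz
```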

Envoy

Nomad launches and manages Envoy, which runs alongside applications that opt into Connect integration. Envoy acts as a proxy, providing secure communication with other applications in the cluster. Nomad launches Envoy using its official Docker image.

Also, note that the Connect integration in 0.10 works only in Linux environments.

Example Overview

The example in this blog post enables secure communication between a web application and an API service. The web application and the API service are run and managed by Nomad. Nomad additionally configures Envoy proxies to run alongside these applications. The API service is a simple microservice that increments a count every time it is invoked. It then returns the current count as JSON. The web application is a dashboard that displays the value of the count.

Architecture Diagram

The following Nomad architecture diagram illustrates the flow of network traffic between the dashboard web application and the API microservice. As shown below, traffic originating from the dashboard to the API is proxied through Envoy and secured via mTLS.

Networking Model

Prior to Nomad 0.10, Nomad’s networking model optimized for simplicity by running all applications in host networking mode. This means that applications running on the same host could see each other and communicate with each other over localhost.

In order to support security features in Consul Connect, Nomad 0.10 introduces network namespace support. This is a new network model within Nomad where task groups are a single network endpoint and share a network namespace. This is analogous to a Kubernetes Pod. In this model, tasks launched in the same task group share a network stack that is isolated from the host where possible. This means the local IP of the task will be different than the IP of the client node. Users can also configure a port map to expose ports through the host if they wish.

Configuring Network Stanza

Nomad’s network stanza will become valid at the task group level in addition to the resources stanza of a task. The network stanza also gains a new mode option, which tells the client what network mode to run in. The following network modes are available:

  • “none” – The task group will have an isolated network without any network interfaces.
  • “bridge” – The task group will have an isolated network namespace with an interface that is bridged with the host.
  • “host” – Each task will join the host network namespace; a shared network namespace is not created. This matches the current behavior in Nomad 0.9.

Additionally, Nomad’s port stanza now includes a new “to” field. This field allows for configuration of the port to map to inside of the allocation or task. With bridge networking mode, and the network stanza at the task group level, all tasks in the same task group share the network stack including interfaces, routes, and firewall rules. This allows Connect enabled applications to bind only to localhost within the shared network stack, and use the proxy for ingress and egress traffic.

The following is a minimal network stanza for the API service in order to opt into Connect.

```hcl
network {
  mode = "bridge"
}
```

The following is the network stanza for the web dashboard application, illustrating the use of port mapping.

```hcl
network {
  mode = "bridge"

  port "http" {
    static = 9002
    to     = 9002
  }
}
```

Configuring Connect in the API service

In order to enable Connect in the API service, we need to specify a network stanza at the group level and use the connect stanza inside the service definition. The following snippet illustrates this:

```hcl
group "api" {
  network {
    mode = "bridge"
  }

  service {
    name = "count-api"
    port = "9001"

    connect {
      sidecar_service {}
    }
  }

  task "web" {
    driver = "docker"

    config {
      image = "hashicorpnomad/counter-api:v1"
    }
  }
}
```

Nomad will run Envoy in the same network namespace as the API service, and register it as a proxy with Consul Connect.
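
One way to confirm the registration (this check is not part of the original walkthrough, and the output is illustrative) is to list Consul's service catalog; Connect registers the proxy under the default <service>-sidecar-proxy naming convention:

```bash
# List the services registered in Consul after the job is running.
$ consul catalog services
consul
count-api
count-api-sidecar-proxy
```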

Configuring Upstreams

In order to enable Connect in the web application, we need to configure the network stanza at the task group level. We also need to provide details about the upstream services it communicates with, which in this case is the API service. More generally, upstreams should be configured for any other service that this application depends on.

The following snippet illustrates this.

```hcl
group "dashboard" {
  network {
    mode = "bridge"

    port "http" {
      static = 9002
      to     = 9002
    }
  }

  service {
    name = "count-dashboard"
    port = "9002"

    connect {
      sidecar_service {
        proxy {
          upstreams {
            destination_name = "count-api"
            local_bind_port  = 8080
          }
        }
      }
    }
  }

  task "dashboard" {
    driver = "docker"

    env {
      COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
    }

    config {
      image = "hashicorpnomad/counter-dashboard:v1"
    }
  }
}
```

In the above example, the static = 9002 parameter requests that the Nomad scheduler reserve port 9002 on a host network interface. The to = 9002 parameter forwards that host port to port 9002 inside the network namespace. This allows you to connect to the web frontend in a browser by visiting http://<host_ip>:9002.

The web frontend connects to the API service via Consul Connect. The upstreams stanza defines the remote service to access (count-api) and what port to expose that service on inside the network namespace (8080). The web frontend is configured to communicate with the API service with an environment variable, $COUNTING_SERVICE_URL. The upstream's address is interpolated into that environment variable. In this example, $COUNTING_SERVICE_URL will be set to “localhost:8080”.

With this set up, the dashboard application communicates over localhost to the proxy’s upstream local bind port in order to communicate with the API service. The proxy handles mTLS communication using Consul to route traffic to the correct destination IP where the API service runs. The Envoy proxy on the other end terminates TLS and forwards traffic to the API service listening on localhost.
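
Because Connect authorizes connections with Consul intentions, you can also exercise the authorization path by hand. The commands below are illustrative and not part of the original example: denying the intention between the two services should cause the dashboard to lose connectivity to the API, and deleting the intention restores it.

```bash
# Deny traffic from the dashboard to the API; the counter should stop updating.
$ consul intention create -deny count-dashboard count-api

# Remove the intention to restore connectivity.
$ consul intention delete count-dashboard count-api
```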

Job Specification

The following job specification contains both the API service and the web dashboard. You can run this using nomad run connect.nomad after saving the contents to a file named connect.nomad.

```hcl
job "countdash" {
  datacenters = ["dc1"]

  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "count-api"
      port = "9001"

      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "docker"

      config {
        image = "hashicorpnomad/counter-api:v1"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to     = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "9002"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port  = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }

      config {
        image = "hashicorpnomad/counter-dashboard:v1"
      }
    }
  }
}
```

UI

The web UI in Nomad 0.10 shows details relevant to Connect integration whenever applicable. The allocation details page now shows information about each service that is proxied through Connect.

In the above screenshot from the allocation details page for the dashboard application, the UI shows the Envoy proxy task. It also shows the service (count-dashboard) as well as the name of the upstream (count-api).

Limitations

  • The Consul binary must be present in Nomad's $PATH to run the Envoy proxy sidecar on client nodes.
  • Consul Connect Native is not yet supported.
  • Consul Connect HTTP and gRPC checks are not yet supported.
  • Consul ACLs are not yet supported.
  • Only the Docker, exec, and raw exec drivers support network namespaces and Connect.
  • Variable interpolation for group services and checks is not yet supported.

Conclusion

In this blog post, we shared an overview of native Consul Connect integration in Nomad. This enables job specification authors to easily opt in to mTLS across services. For more information, see the Consul Connect guide.

from HashiCorp Blog

HashiCorp Consul Enterprise Supports VMware NSX Service Mesh Federation

Recently at VMworld 2019 in San Francisco, VMware announced a new open specification for Service Mesh Federation. This specification defines a common standard to facilitate secure communication between different service mesh solutions.

Service mesh is quickly becoming a necessity for organizations embarking upon application modernization and transitioning to microservice architectures. Consul service mesh provides unified support across a heterogeneous environment: bare metal, virtual machines, Kubernetes, and other workloads. However, some organizations may choose to run different mesh technologies on different platforms. For these customers, federation becomes critical to enable secure connectivity across the boundaries of different mesh deployments.

We have partnered with VMware to support the Service Mesh Federation Specification. This blog will explain how services running in HashiCorp Consul service mesh can discover and connect with services in VMware NSX Service Mesh (NSX-SM).

What is Service Mesh Federation

consul service mesh federation

Service Mesh Federation is the ability for services running in separate meshes to communicate as if they were running in the same mesh. For example, a Consul service can communicate with an NSX-SM service running in a remote cluster in the same way it would communicate with another Consul service running in the same cluster.

How Does Consul Enterprise Support Service Mesh Federation

Service Sync

The first step towards supporting federation is Service Sync: sharing which services are running on each mesh. To accomplish this, Consul Enterprise implements the Service Mesh Federation Spec via the new Consul federation service. The Consul federation service communicates with NSX-SM’s federation service to keep the service lists in sync so that each mesh is aware of each other’s services.

consul service mesh federation service

First, Consul sends the foo service to the remote federation service and receives the bar service.

consul service sync

Next, Consul creates a Consul bar service to represent the remote bar service.

Inter-Mesh Communication: Consul to NSX-SM

With services synced, Consul services can now talk to remote services as if they were running in the same cluster. To do this, they configure their upstreams to route to the remote service’s name.

In this example, the Consul foo service wants to call the NSX-SM bar service. We configure an upstream so that port 8080 routes to bar:

```hcl
service {
  name = "foo"

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "bar"
            local_bind_port  = 8080
          }
        ]
      }
    }
  }
}
```

Then from the foo service, we simply need to talk to http://localhost:8080:

```shell
$ curl http://localhost:8080
<response from bar service>
```

Under the hood, we’re using the Consul service mesh sidecar proxies to encrypt all the traffic using TLS.

Consul connect to nsx service mesh

Inter-Mesh Communication: NSX-SM to Consul

From the bar service running in NSX-SM, we can use KubeDNS to talk to the foo service in Consul:

```shell
$ curl foo.default.svc.cluster.local
<response from foo service>
```

This request will route to the Consul Mesh Gateway and then to foo’s sidecar proxy. The sidecar proxy decrypts the traffic and then routes it to the foo service.

Conclusion

Service mesh federation between Consul Enterprise and NSX-SM allows traffic to flow securely beyond the boundary of each individual mesh, enabling flexibility and interoperability. If you would like to learn more about Consul Enterprise’s integration with NSX-SM, please reach out to our sales representatives to schedule a demo.

For more information about this and other features of HashiCorp Consul, please visit: https://www.hashicorp.com/products/consul.

from HashiCorp Blog

Announcing Clustering for Terraform Enterprise

Today we are announcing the beta version of Clustering for HashiCorp Terraform Enterprise. Increasingly, organizations are adopting Terraform Enterprise as their standard provisioning platform and the new Clustering functionality enables them to easily install and manage a scalable cluster that can meet their performance and availability requirements. The beta version of Clustering supports installations of Terraform Enterprise in AWS, Azure, and GCP.

This blog will discuss the clustering capability in Terraform Enterprise.

A Horizontally Scalable, Redundant Cluster

Until today, Terraform Enterprise followed the industry standard of a single instance appliance. This was acceptable at first, but as organizations expand their usage of Terraform Enterprise and it becomes their central provisioning platform across business units, the requirements around performance and availability increase. Organizations expect to be able to:

  • Scale Terraform Enterprise horizontally to keep up with ever-growing workloads
  • Trust Terraform Enterprise will always be available to provision their most critical infrastructure

The new Clustering functionality in Terraform Enterprise meets both of these needs.

Scale to Meet Demand

As organizations look to spread the benefits of Terraform Enterprise across their businesses, performance becomes a top concern. Even with the largest instances available, end-users can experience sluggishness in the UI and long wait times for runs to complete.

Running Terraform Enterprise as a cluster solves these performance issues by spreading the workload across multiple nodes. All API requests, background processes, and Terraform runs are fanned out across the cluster, allowing any number of concurrent end-users and runs to be supported.

In fact, our default installation pattern now includes configuration options to enable autoscaling groups, giving organizations the ability to respond to spikes in demand without human intervention.

Better Availability

Terraform Enterprise is responsible for the infrastructure behind some of the world’s most critical applications. Before today, organizations relied on a pair of instances with one acting as a passive standby to make sure Terraform Enterprise was always available.

While sufficient for some, the risk was too high for many organizations. The existing strategy required too much downtime to promote the passive instance, and then there was always the question of what would happen if both instances were lost.

Running Terraform Enterprise as a cluster mitigates these risks. The provided installer provisions nodes across availability zones by default, and running Terraform Enterprise across three or more nodes ensures zero downtime for most common failure events.

Easier to Install & Manage

Installation is now as simple as a terraform apply from an operator’s machine. Instead of configuring a single instance from the command line, operators can now use an official module from the Terraform Module Registry to deploy an entire cluster to AWS, Azure, or GCP.

That same Terraform config can then be edited later to reconfigure the cluster, for example by increasing the maximum size of the auto-scaling group before rolling Terraform Enterprise out to a new business unit.
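
As a rough sketch only (the module source and input names below are placeholders rather than the official module's exact interface), an operator's configuration might look something like this:

```hcl
# Placeholder example of driving a Terraform Enterprise cluster from a
# registry module. Consult the module's documentation for its real source
# address and variables.
module "terraform_enterprise" {
  source = "hashicorp/terraform-enterprise-cluster/aws" # placeholder source

  license_file    = "company.rli" # placeholder inputs
  primary_count   = 3
  secondary_count = 5
}
```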

Please see the public documentation or watch the demo below to see how simple it is now:

The journey begins and ends with infrastructure as code, just as it should.

Next Steps

While the Clustering beta already represents a big leap forward, we have bigger plans as we continue to build out the platform.

More Configuration Options

The Terraform modules we provide today are meant to cover the requirements of most straightforward solutions; however, they will not work out of the box for some unique situations. Operators are welcome to fork our modules to make them work for their environment, but our goal is to make the official modules extensible enough to eliminate this need.

HA without External Dependencies

The current Clustering Beta is the most reliable version of Terraform Enterprise we have ever built, but it still requires an external PostgreSQL server and blob storage to achieve a high level of availability. Our goal is to remove these external dependencies to make installing a highly available cluster even simpler.

Simpler Backup and Restore Procedure

Operators can expect to see our new snapshot tooling in beta over the next six weeks. Rather than relying on vendor-specific backups, we are building our own tool that makes your data completely portable. Controllable via the command line or an HTTP API, the new functionality will be the key to smooth production upgrades.

Getting Started

We are excited to release the beta version of clustering for Terraform Enterprise to help organizations with scalable performance, high availability, and simple install and management of the application. To learn more about clustering in Terraform Enterprise refer to the documentation here.

For more information on Terraform Enterprise or to get started with your free trial, visit the Terraform product page.

from HashiCorp Blog

Announcing Cost Estimation for Terraform Cloud and Enterprise

Today we're releasing Cost Estimation for HashiCorp Terraform Cloud and HashiCorp Terraform Enterprise. Cost Estimation provides organizations insight into the cost implication of infrastructure changes before they’re applied. Included in this release is integration with policy as code workflows via Sentinel to enable preventative cost-control policies to be automatically assessed against changes.

Organizations that are using an infrastructure as code approach to manage their multi-cloud provisioning, compliance, and management requirements can find it challenging to understand the cost implications of a change before it is applied. Many have been relying on after-the-fact alerts from their cloud provider, using dedicated third party services that continually monitor changes in cost, or potentially waiting until they receive their end of month bill to understand the cost impact of their changes. This new capability now enables teams who manage their self-service infrastructure to view an estimate of changes in monthly cost from their cloud provider before applying any change. HashiCorp’s Sentinel allows cost-centric policies to be created and then automatically enforced in the Terraform workflow. Administrators then have the ability to approve significant changes or to completely prevent specific workspaces from exceeding predetermined thresholds.

This blog will discuss the cost estimation capability and the associated workflow.

Pre-emptive cost visibility

One common challenge of cloud infrastructure adoption is enabling the practitioners deploying the changes to understand the financial impact of the changes they’re applying. Many no longer require direct access to the console of the cloud provider, so they don’t see the billing-related details of the resources they are provisioning until long after those resources have been deployed. This can create a situation where those responsible for financial governance need to work with DevOps teams to retrospectively reduce the cost profile of infrastructure after it has been deployed, a task that is more complicated and carries more risk than if there had been earlier intervention.

Addressing this from a more proactive standpoint, organizations can take a shift-left approach: IT Ops researches the problems that impact cost, collates the data to formalize policies that influence actions across all DevOps teams, and enables everyone in the organization to take consistent actions based on estimated infrastructure costs and company policy.

All teams using Terraform Cloud and Enterprise can now see an estimate of changes before they are applied by enabling “Cost Estimation” within the settings of their Terraform Organization:

Settings

Understanding the cost impact of changes

Once Cost Estimation is applied to a Terraform organization, all future runs will include an estimate of the updated total monthly cost for the resources that have changed, along with a separate estimate of the change in cost:

Cost Estimation

In the above example, you can see that the total monthly cost has increased by $14.98/mo, which brings the total cost of those resources up to $37.44/mo. This change increases the maximum allowed size of an Auto Scaling Group on AWS from 3 instances to 5. A simplified example of that config looks like:


resource "aws_autoscaling_group" "prod-web-servers" {
name = "prod-web-servers"
max_size = 5
min_size = 0
desired_capacity = 0
}

It’s worth pointing out that only the max_size argument has changed here; the min_size and desired_capacity remain at zero. In effect, this most likely means there is no immediate cost impact from this change (in this example the desired capacity is zero, so there should be no instances running). However, dynamic scaling rules could change this at any time, up to the maximum threshold. As a result, the estimate takes the maximum potential footprint into consideration when calculating the change.

Now practitioners applying a change can have before-the-fact visibility into the potential cost impact of a change. This makes it easier to identify simple mistakes in configuration that could have significant financial implications, collaborate with other business units to keep costs aligned with forecasts, and support early intervention and remediation workflows at the most cost effective and lowest risk time to adjust implementation.

Enforcing cost controls with HashiCorp Sentinel

Sentinel is a Policy as Code framework that’s integrated into multiple product offerings from HashiCorp. Sentinel within Terraform enables an organization to define policies that are enforced against infrastructure between the plan and apply phases of a Terraform run. Compared to many tools that scan existing infrastructure for policy infractions, Sentinel proactively prevents provisioning of out-of-policy infrastructure and gives teams the confidence that all changes they deploy are within the organization’s policy.

These latest enhancements with cost estimation expand on that capability to ensure consistent financial governance is applied to infrastructure changes.

Escalation workflows for expensive changes

It can be difficult to find the right pragmatic balance between allowing teams the agility to provision the infrastructure they need and keeping costs aligned with the expected value of a project. This can lead to approval workflows that require oversight from an individual or team to determine whether changes in cost are reasonable, which in turn can slow delivery and itself increase the cost of implementation.

A Policy as Code approach that takes advantage of the cost estimation features of Terraform means organizations can now set guidelines on what constitutes an acceptable change and escalate for review only when a change breaches the standard policy. This frees up time in the approval workflow by ensuring the review team only handles genuine escalations, while the practitioners responsible for implementation are able to self-service in-policy changes with confidence.

In the example below you see a Terraform run which has breached a “soft” policy check:

Policy Soft Fail

You can see the failed state here is against the cost-control/modest-increases policy, and the entire run has been halted and placed into a “Policy Override” state. This is because the policy has been written to prevent any single change that increases the cost by more than $500/mo.

For this run to proceed, a Terraform user with admin or policy author permissions will be required to review the plan, provide an optional explanation for why the policy is being overridden, and then ultimately click “Override & Continue”.

The code to implement the policy is:

```
import "tfrun"
import "decimal"

main = rule {
  decimal.new(tfrun.cost_estimate.delta_monthly_cost).less_than(500)
}
```

In this example, the recently announced tfrun import is used, along with the new decimal import, to ensure we are working with data types suited to currency. The rule then accesses the change in monthly cost that has been estimated for this run (tfrun.cost_estimate.delta_monthly_cost) and ensures it is less than 500.

Applying strict cost controls and automated oversight

There may also be known cost thresholds that should never be breached, where automatic escalation into an approval workflow and a manual review would be unnecessary. An example could be preventing workspaces that manage “development” environments from ever exceeding a maximum estimated monthly cost. With such a policy in place, developers have the freedom and confidence to experiment with any infrastructure configuration they desire without the risk of uncomfortable conversations later. It also expands their autonomy in self-service workflows: a breach in policy allows them to make pragmatic decisions about what infrastructure could be deprovisioned to free additional budget for the new changes they wish to deploy.

Here you can see an example of a Terraform run that has had a policy “hard” fail:

Policy Hard Failure

The hard failure in this instance is the cost-control/max-budget policy which has been defined to prevent the total estimated monthly cost from exceeding $10,000/mo. The code to implement this policy is:

```
import "tfrun"
import "decimal"

main = rule {
  decimal.new(tfrun.cost_estimate.proposed_monthly_cost).less_than(10000)
}
```

The code here is very similar to the previous example with two notable differences:

  • the value of tfrun.cost_estimate.proposed_monthly_cost, which provides an estimated aggregate cost of the resources running in the workspace, rather than just the expected change in total cost.
  • the comparison value is updated to 10,000 (up from 500).
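
The soft and hard outcomes shown above are not determined by the policy code itself; they come from the enforcement level assigned to each policy. As a hedged sketch (assuming a VCS-backed policy set, with policy names taken from the examples above), the accompanying configuration might look like:

```hcl
# sentinel.hcl for the policy set: the enforcement level is what makes one
# policy overridable ("soft-mandatory") and the other a hard stop.
policy "modest-increases" {
  enforcement_level = "soft-mandatory"
}

policy "max-budget" {
  enforcement_level = "hard-mandatory"
}
```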

The video below demonstrates the cost estimation workflow with Sentinel policies and enforcement:

Getting Started

You’ve seen just a glimpse of the workflows supported by the cost estimation and policy features available as part of Terraform Cloud and Enterprise, and how they enable teams to self-service changes in dynamic environments while also giving them the confidence that they will stay within guidelines set by their organization.

For more information on Terraform Cloud and Terraform Enterprise or to get started with your free trial, visit the Terraform product page. To learn more about Terraform visit the HashiCorp Learn platform and see it in action.

from HashiCorp Blog

Announcing Terraform Cloud

Earlier this year we announced Terraform Cloud Remote State Management: a free platform for users to more easily collaborate on Terraform configurations through remotely stored, versioned, and shared Terraform state files. Today, we’re excited to announce the full release of Terraform Cloud.

This release brings new features to enable Terraform users and teams to collaborate on and automate Terraform workflows. It includes remote operations, VCS connections, and unlimited workspaces, delivered through a web-based application.

Terraform Cloud is free to get started and available today. You can sign up directly in the application here.

Here’s a short demo showing off some of the new features and workflow:

Background

Terraform began as an open source tool first released in 2014. We could not have predicted the tremendous community of contributors and users from around the world that would form around this tool. Today, the Terraform repository has almost 25,000 commits from almost 1,300 contributors.

The next development, Terraform Enterprise, introduced a powerful, scalable solution to address the needs of businesses using the open source tool. It made collaboration among teams easier and gave businesses governance control over their deployments. Some of the largest organizations in the world use Terraform Enterprise for provisioning, and compliance and management of their infrastructure.

Having served both individuals with Terraform Open Source and large organizations with Terraform Enterprise, we saw a new need to be met: the teams and businesses that fit in between. Today, with Terraform Cloud, we hope to bridge that gap by taking the best workflow for Terraform and making it accessible to everyone. This new product is free to use and includes features that help users and teams move quickly and collaborate on Terraform, while also making it easier to get started.

Features

Terraform Cloud is free to use for teams of up to 5 users and includes automation and collaboration features:

Automation
– VCS Connection (Github, Gitlab, Bitbucket)
– Remote Plans and Applies
– Notifications/Webhooks
– Full HTTP API

Collaboration
– State Management (Storage, History, and Locking)
– Collaborative Runs
– Private Module Registry
– Full User Interface

The automation features help streamline the workflow for deploying infrastructure as code with Terraform: store code in VCS, and Terraform Cloud can automatically start runs based on pull requests and merges. Additionally, users can trigger runs from the command line, the API, or from within the application UI.
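
For example, one common way to wire a local working directory to Terraform Cloud, so that plans and applies run remotely, is the remote backend; the organization and workspace names below are placeholders:

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org" # placeholder

    workspaces {
      name = "example-workspace" # placeholder
    }
  }
}
```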

The collaboration features help teams work together. With Terraform Cloud, teams of developers and operations engineers can collaborate on remote Terraform runs through the VCS-driven review/approval process. The private module registry allows those teams to easily share configuration templates and collaborate asynchronously using remote state file storage.

Paid Upgrades

On top of the freely available features there are two paid upgrades for Terraform Cloud: Team, and Team & Governance. As part of this launch we will be giving users a free trial of these upgraded features until January 1, 2020.

The first upgrade is Team, which allows you to add more than 5 users, create multiple teams, and control the permissions of users on those teams. The Team upgrade is for organizations that need to enforce RBAC (Role-Based Access Control).

The next upgrade available is Team & Governance. It includes the team upgrade as well as the ability to use Sentinel and Cost Estimation. Sentinel is a policy as code framework for enforcing fine-grained rules against your infrastructure, which can help organizations enforce compliance and manage costs. Cost Estimation gives users an estimated cost of infrastructure before it is provisioned. Sentinel and Cost Estimation can be used in conjunction for enforcing cost management policies against your infrastructure. See our offerings page for a detailed comparison between Terraform packages.

Improved Workflow

Terraform Cloud brings users a new workflow to streamline infrastructure provisioning:

First, Terraform users define infrastructure in a simple, human readable configuration language called HCL (HashiCorp Configuration Language). Users can choose to write unique HCL configuration files from scratch or leverage existing templates from either the public or private module registries.

Most users will store their configuration files in a VCS (Version Control System) and connect that VCS to Terraform Cloud. That connection allows users to borrow best practices from software engineering to version and iterate on infrastructure as code, using VCS and Terraform Cloud as a provisioning pipeline for infrastructure.

Terraform can be configured to automatically run a plan upon changes to configuration files in a VCS. This plan can be reviewed by the team for safety and accuracy in the Terraform UI, then it can be applied to provision the specified infrastructure.

Conclusion

Terraform Cloud marks the continued evolution of Terraform. To date, the depth and breadth of ecosystem and community around Terraform has driven global adoption. Now, those users want to work with the tool in teams and drive even more complex automation. We hope to meet those needs with this release. You can sign up for Terraform Cloud today here.

Also, in celebration of the launch we will be randomly selecting from anyone who signs up for Terraform Cloud today to win a HashiCorp Terraform SWAG package.

from HashiCorp Blog

Announcing HashiCorp Consul Service on Azure

We are pleased to announce the new HashiCorp Consul Service (HCS) on Azure, which is now in private beta. HCS on Azure enables Microsoft Azure customers to natively provision HashiCorp-managed Consul clusters in any Azure region directly through the Azure Marketplace. As a fully managed service, HCS on Azure lowers the barrier to entry for an organization to leverage Consul for service discovery or service mesh across a mix of VM, hybrid/on-premises, and Kubernetes environments while offloading the operational burden to the site reliability engineering (SRE) experts at HashiCorp. Azure-native identity and billing integrations enable an organization to adopt Consul without introducing any additional administrative burden.

HashiCorp Consul: Multi-Cloud Service Networking Platform

Consul’s service networking capabilities enable an organization to connect and secure services across any runtime platform or public cloud provider. HCS on Azure enables users to more easily leverage Consul’s key capabilities, including:

  • Service Discovery: Provide a service registry with integrated health checking to enable any service to discover and be discovered by other services

  • Service Mesh: Simplify service networking by shifting core functionality from centralized middleware to the endpoints. Consul’s service mesh functions include:

    • Dynamic Traffic Management: Enable advanced traffic management to support different deployment strategies and improve application resiliency
    • Service Segmentation: Encrypt communications and control access across services with mutual TLS and a native Envoy integration.
    • Observability: Enable networking metric collection to provide insights into application behavior and performance without code modifications
    • Mesh Gateway: Route traffic transparently and securely across Azure regions, private data centers, and runtime environments like AKS, Azure Stack, and HashiCorp Nomad.

HCS on Azure: How Does it Work?

HCS on Azure leverages the Azure Managed Applications platform to enable a user to natively provision Consul through the Azure console, while interfacing with the HCS control plane behind the scenes to perform the deployment and carry out all necessary operational tasks:

How HCS on Azure works under the hood

After subscribing to HCS on Azure within the Azure Marketplace, a user can create a Consul cluster by just selecting a few options to indicate the desired Azure region, Consul version, and network details:

Create an HCS on Azure cluster

Once the user initiates a cluster creation, the HCS control plane will be notified. The integration with the Azure Marketplace allows HCS to provision Consul servers directly into a resource group in the user's Azure subscription:

HCS on Azure cluster details

After the provisioning step completes, any authorized user can view and interact with Consul via the standard Consul Web UI within the Azure console:

HCS on Azure Consul Web UI

Workflows to support backups, monitoring, federation, access control, and TLS encryption will be detailed in a future publication when HCS on Azure becomes generally available.

Benefits for Azure Customers

HCS on Azure enables any organization that runs at least part of its infrastructure in Azure to adopt Consul with a minimum of operational overhead, which in turn enables it to increasingly focus resources on the applications and workloads that are the primary concern of the business. Integrations with Azure identity and billing systems enable a seamless Azure-native experience for existing customers, allowing them to harness HashiCorp’s operational expertise without adding any additional administrative complexity. These advantages apply to single region VM-based Azure environments in need of basic Service Discovery as well as more complex multi-environment scenarios that require Service Mesh-related features like dynamic traffic routing and service segmentation.

Consul’s Mesh Gateway feature can be particularly beneficial to users that are running multiple Kubernetes or AKS environments, enabling multi-cluster service discovery and request routing. Mesh Gateway enables secure traffic routing across environments based on service-level identity rather than the IP address. This effectively flattens the network and renders per-environment IP address management strategies irrelevant. This pattern applies equally to any mix of VM, hybrid/on-premises, and Kubernetes environments. Kubernetes-based deployments also benefit from Consul’s support for the Microsoft Service Mesh Interface, which enables a user to define Consul Connect intentions in a custom Kubernetes resource that can be directly managed with kubectl or Helm.

Next Steps

HCS on Azure is currently in private beta. If you are interested in participating in the private beta, please contact your HashiCorp account representative for more information. To sign up for status updates and to be notified as HCS develops, please visit the HCS on Azure landing page. If you are new to HashiCorp Consul, please visit the Consul Learn Documentation to get started!

from HashiCorp Blog

Announcing HashiCorp Nomad 0.10 Beta

We are pleased to announce the availability of a beta release for HashiCorp Nomad 0.10.

Nomad is an easy-to-use, flexible, and performant workload orchestrator that deploys containers and legacy applications. Nomad is widely adopted and used in production by PagerDuty, Target, Citadel, Trivago, Pandora, and more.

Nomad 0.10 introduces advanced networking and storage features that enhance our support for stateful workloads and sidecar applications. The major new features in Nomad 0.10 include:

  • Consul Connect: seamless deployments of sidecar applications with secured service-to-service communication and bridge networking
  • Network Namespaces: secure intra-task communication over the loopback interface for tasks within a group
  • Host Volumes: expanded support for stateful workloads through locally mounted storage volumes
  • UI Allocation File Explorer: enhanced operability with a visual file system explorer for allocations

Consul Connect

Nomad 0.10 enables easy, seamless deployments of sidecar applications and segmented microservices through Consul Connect.

```hcl
service {
  name = "count-dashboard"
  port = "9002"

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "count-api"
          local_bind_port  = 8080
        }
      }
    }
  }
}
```

Consul Connect introduces new connect and sidecar_service stanzas for service jobs. Nomad will automatically launch and manage an Envoy sidecar proxy alongside the application in the job file. This Envoy service is registered with Consul and is used to provide mTLS communications with other applications within the Nomad cluster. Sidecar configuration is available through the new upstreams stanza and its destination_name and local_bind_port parameters.

See the Consul Connect Integration guide for more details.

Network Namespaces

The network stanza is now configurable with three modes: bridge, host, and none. Bridge mode enables secure intra-task communication over the loopback interface. Tasks within a task group can now share a networking stack (interfaces, routes, firewall) that is isolated from the host.
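
For example, a task group can opt into the new bridge mode with a group-level network stanza like the following (the group name is illustrative):

```hcl
group "example" {
  network {
    mode = "bridge"
  }
}
```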

Host Volumes

Nomad 0.10 expands support for running stateful workloads with Host Volumes.

```hcl
host_volume "mysql_hv" {
  path      = "/opt/mysql/data"
  read_only = false
}
```

External storage volumes attached at the node level can now be made available to Nomad with the new host_volume stanza in the client configuration. Read/write access can additionally be set through the read_only parameter.

```hcl
job "myapp" {
  group "db" {
    count = 1

    volume "mysql_vol" {
      type = "host"

      config {
        source = "mysql_hv"
      }
    }

    restart { ... }

    task "mysql" {
      driver = "docker"

      volume_mount {
        volume      = "mysql_vol"
        destination = "/var/lib/mysql"
      }

      config { ... }

      resources { ... }

      service { ... }
    }
  }
}
```

The new volume and volume_mount stanzas in the job file enable users to mount storage volumes into the applications that require them. Nomad will automatically ensure that the specified storage volume is available and mounted into the job at the task or group level, depending on the required granularity.

See the Host Volumes guide for more details.

Allocation File Explorer

Nomad 0.10 brings the ability to browse the file system of an allocation to the Web UI, including streaming files and viewing images.

Files in an allocation’s directory are now visualized in the Nomad Web UI, which enables faster, more intuitive debugging at the file system level. Historically, the file system was accessible in Nomad only through the command line via nomad alloc fs. The Allocation File Explorer is especially useful for users who write logs directly to files instead of the default stdout/stderr.
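
For reference, the existing CLI equivalent looks like this (the allocation ID and file name are placeholders):

```bash
# Browse an allocation's directory and stream a log file from the command line.
$ nomad alloc fs 8a3cf7a5 alloc/logs/
$ nomad alloc fs 8a3cf7a5 alloc/logs/redis.stdout.0
```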

Images will be automatically rendered and files will be streamed when applicable. Nomad’s Allocation File Explorer will be familiar to Consul and Vault users, as those products have similar visualizations for key/values and secrets, respectively.

Note: We encourage you to experiment with these new features, but recommend against using this build in a production environment. Depending on feedback and resolving outstanding issues, we may release further betas, release candidates, and will publish a final generally available Nomad 0.10 release when appropriate.

Thank you to our active community members who have been invaluable in reporting bugs and improving the documentation for Nomad in this release!

from HashiCorp Blog

Celebrating Our HashiCorp User Group Community – 25,000 and Growing!

We started the HashiCorp User Group (HUG) program with the goal of bringing together members of our community to educate each other on our tools, best practices, emerging patterns, and to build a stronger sense of community. Along the way we have been lending a hand and supporting our dedicated local organizers to curate in-person experiences for their chapters.

Since the first HUG in 2015, there has been a huge growth of members and enthusiasm from around the world. We finished that first year with about 350 members. Today, we are excited to announce that the HUGs are 25,000 members strong.

The HUG program now includes members in 113 cities across 44 countries. In the past year, our organizers have hosted 220 meetups and presented in 106 cities.

Thank you to all of our local volunteers who lead the HUGs; we could not do this without you. We continue to support our organizers as they pilot new styles of Meetups, bring us ideas such as HashiTalks, and participate in our HashiDays and HashiConf user and technology conferences around the world as speakers and attendees.

Community is the heart of our company culture and our HUGs are part of the in-person story and experience that our practitioners value. We are pleased to watch our HUG program grow and meet so many of our passionate users along the way. We continue to be amazed by the dedication, knowledge sharing, and, most importantly, the kindness of our community.

Community brings us together and enables us all to learn from each other while pushing the industry ahead through collaboration and innovation. We are excited to watch our HUG community continue to grow, diversify, and strengthen across all borders.

Get involved

Check out our global HUG program and join your local chapter. Don’t see one? We would love to hear from you!

In addition, you can stay connected with HashiCorp, our products, and the larger community through our Community Forum. We look forward to connecting with you there!

from HashiCorp Blog

Terraform Learning Resources: Getting Started with Sentinel in Terraform Cloud

The Sentinel governance feature in Terraform Cloud allows you to enable logic-based policy decisions and enforce best practices in your organization. We are excited to announce a new Sentinel Getting Started track on HashiCorp Learn to help you use Sentinel in your Terraform Cloud workflow.

An introduction to Sentinel with Terraform Cloud

Sentinel is a tool for preventing mistakes and placing guardrails around operations in your organization. Without it, you may find that accidental charges for large EC2 instances, improperly configured Security Groups, or under-utilized resources are harder to track and prevent.

An example of a standard TFC workflow without Sentinel

Without Sentinel, it is the job of the operator to ensure their resource configuration adheres to the organization's standards.

A workflow with Sentinel

With Sentinel in Terraform Cloud, the operator will not be allowed to create resources that deviate from the defined parameters of your organization's Sentinel policy. If you would like to learn how to get started with Sentinel in Terraform Cloud, the HashiCorp Learn platform now has a Sentinel Getting Started track with hands-on guides for implementing Policy-As-Code in your organization.

What You'll Learn

The Sentinel Getting Started track on the Learn platform will teach new users:

  • Policy vocabulary
  • How to build policies
  • How to create policy sets
  • Mocking and testing policies with the Sentinel Simulator
  • How to use the Terraform Sentinel Provider

The Sentinel Simulator is featured heavily to run tests and mock data, so be sure to download it here.

For an example of how the Sentinel Simulator works, let's start by looking at a real Sentinel policy:

```
hour = 4
main = rule { hour >= 0 and hour < 12 }
```

The first line of this example declares a variable named hour with the value 4. The second line declares a rule that returns true if hour is between 0 and 12.

This policy can be applied using the Sentinel Simulator to determine whether it passes or fails. Save this file as policy.sentinel and run the Sentinel Simulator against it:

```shell
$ sentinel apply policy.sentinel
```

You should receive an output of PASS from this command. Check out the guide to find out why!
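
To see the failing path (this variation is not part of the guide excerpt above), change the variable so the rule no longer holds and re-run the simulator:

```shell
# After editing policy.sentinel so that hour = 13:
$ sentinel apply policy.sentinel
# The main rule now evaluates to false, so the command reports a failure
# and exits with a non-zero status.
```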

New Sentinel Features

For those familiar with Sentinel, the Governance team is excited to announce that managing policies is even easier in Sentinel with VCS integrated Policy Sets.

Instead of managing single policies one by one, Sentinel now allows organizations to manage policies in VCS repositories and instantly enforce them across as many Terraform Cloud workspaces as necessary. To learn more about this new feature, visit the HashiCorp Learn platform to see it in action.

from HashiCorp Blog

What’s Next for Vault and Kubernetes

We're excited to announce multiple features that deeply integrate HashiCorp Vault with Kubernetes. This post will share the initial set of features that will be released in the coming months. The initial roadmap focuses on features that allow users with limited or no prior knowledge to make use of secret management capabilities, including storing and accessing secrets, without additional customization.

Our goal is to give you a variety of options for how you can leverage Vault and Kubernetes to securely introduce secrets into your application stack. We are doing this because there is a spectrum of security and implementation details that comes with each option, be that injecting secrets into a pod, secret synchronization, etc. So, we want to give you several options to choose from that best fit your applications and use cases.

Approaching Security with Vault and Kubernetes

The most secure way to interact with Vault is for an application to directly integrate with the Vault API. Within Kubernetes, this would mean the application uses its Kubernetes service account to authenticate with Vault. However, this requires that the application be written or rewritten with Vault awareness, which is not realistic for many workloads.
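
For context, direct integration typically relies on Vault's Kubernetes auth method, which validates a pod's service account token against the Kubernetes API. The commands below are a hedged sketch with placeholder names, not steps taken from this announcement:

```bash
# Enable the Kubernetes auth method and point it at the cluster's API server.
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://kubernetes.default.svc" \
    kubernetes_ca_cert=@ca.crt \
    token_reviewer_jwt=@reviewer-token.jwt

# Map a Kubernetes service account to a Vault policy (names are placeholders).
vault write auth/kubernetes/role/myapp \
    bound_service_account_names=myapp \
    bound_service_account_namespaces=default \
    policies=myapp-policy \
    ttl=1h
```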

The features we're announcing below enable more automatic access to secrets within the context of Kubernetes. Some of these features will require more careful securing of Kubernetes itself or acceptance of windows of risk where secrets are copied through Kubernetes.

Features

The following is the list of features that are being worked on and released in the coming months. Follow-on announcement blog posts will cover each in detail, and each item will be updated to link to that announcement post.

  • Helm Chart: By using the Helm chart, you can greatly reduce the complexity of running Vault on Kubernetes, and it gives you a repeatable deployment process in less time (vs rolling your own). This feature has been released and initially supports installing and updating open-source Vault on Kubernetes in three distinct modes: single-server, highly-available, and dev mode.

  • Injecting Vault secrets into Pods via a sidecar: To enable access to Vault secrets by applications that don’t have native Vault logic built-in, this feature will allow automatic injection of secrets into the pod file system for static and dynamic secrets. This will allow applications to only concern themselves with finding a secret at a filesystem path, rather than managing the auth tokens and other mechanisms for direct interaction with Vault. This feature will be made available as an option through our Helm chart.
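
As a rough illustration of the Helm chart install described in the first item above (Helm 2 syntax; the chart values shown are assumptions used to illustrate the three modes):

```bash
# Fetch the chart and install Vault in one of the supported modes.
git clone https://github.com/hashicorp/vault-helm.git

helm install --name vault ./vault-helm                                 # single server
helm install --name vault ./vault-helm --set server.ha.enabled=true    # highly available
helm install --name vault ./vault-helm --set server.dev.enabled=true   # dev mode
```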

Feedback Requested

We’re excited to discuss several feature ideas with the community to gather feedback, build support for use cases, and work towards a future release for:

  • Syncer Process: We are exploring the use case of integrating Vault with the Kubernetes Secrets mechanism via a syncer process. This syncer could be used to periodically sync a subset of Vault secrets with Kubernetes so that secrets are always up-to-date for users without directly interacting with Vault. If you would like to provide feedback around this feature please comment on this GitHub issue.

  • Container Storage Interface (CSI) plugin: Similar to the Vault sidecar feature (discussed above), the CSI feature will expose secrets on a volume within a pod. This will enable the injection of secrets into a running pod using a CSI plugin. If you would like to provide feedback around this feature please comment on this GitHub issue.

If you're passionate about Kubernetes, our tools, and improving those integrations, please join us! We have a few roles open for ecosystem engineers and product managers to work on Kubernetes integrations.

from HashiCorp Blog