I really like Kubernetes; I’ve been following it almost since its inception five years ago and have used it successfully in several projects over the past three years. It isn’t without challenges (especially around managing state), but it’s definitely getting better with each release. So when I moved to a new company, it’s no wonder I introduced Kubernetes into our architecture from the get-go.

One thing that’s different in this project, though, is that I am now running Kubernetes in the cloud (AWS for now), whereas in the previous two projects the target was on-prem. Using Kubernetes in the cloud alleviates some of the pains of self-hosting it, like installation and control plane availability, but it also introduces new challenges. One such challenge is security.

There are many ways to protect the inter-pod/service communication inside Kubernetes (maybe I’ll dedicate another post to that), but the problem here is different: it is controlling access from the different pods to other AWS assets (like RDS, S3, other EKS clusters, etc.).

One way to handle security in AWS is to associate an IAM role with an instance. That works well in the “classic” AWS setup, since different instances (or groups of instances) host different services. This is not the case when using Kubernetes: now we have multiple kinds of services (internal and external) running on the same node. If the node carries the union of all the permissions they need, we’re not only violating the principle of least privilege for our own services, we’re probably also exposing our AWS resources to third-party pods running on those same nodes.

It seems that one possible solution is to give the nodes minimal permissions and to use AWS access key pairs for each service. This has two problems: the nodes still need some privileges to be part of the Kubernetes cluster, and it is a major headache to store and distribute the key pairs in a secure manner (making sure they don’t end up hard-coded in source code or propagated to pods as plain text, for example).

Luckily, there’s a better approach that brings the IAM role mechanism from regular EC2 instances down to the pod level. There are a couple of tools that I know of that support this, namely kube2iam and kiam (you can read a nice comparison of the two here).

Though they work a little differently, they are both based on the same approach:

  • You give the nodes permission to assume (some) roles
  • You configure permissions for pods using annotations in the Kubernetes deployment YAML
  • The tool proxies and intercepts calls to the AWS EC2 metadata API and provides temporary credentials by assuming the role specified in the annotation

And presto — your pod only has the privileges it was configured with.
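
A quick way to see this in action from inside an annotated pod: the AWS SDKs fetch credentials from the EC2 metadata endpoint, and with kube2iam (or kiam) in place those calls are answered with credentials for the annotated role. This is just a sketch of what you would expect to see; the role name is a placeholder:

# 169.254.169.254 is the standard EC2 metadata endpoint; the tool intercepts
# these calls on the node and answers on behalf of the pod.

# Lists the role the pod is allowed to use (the one from the annotation)
curl -s http://169://169.254.169.254/latest/meta-data/iam/security-credentials/

# Returns temporary credentials (AccessKeyId, SecretAccessKey, Token, Expiration)
# for that role; "my-pod-role" is a placeholder for your annotated role name
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/my-pod-role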

To get this magic going you need to do three things:

  • Set up permissions (roles and policies)
  • Install kube2iam
  • Annotate your pod deployments

Set Up Permissions

The biggest problem for me was setting the permissions right. To do that you:

1. Add a new policy with sts:AssumeRole permission:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::your-account-id:role/prefix*" } ] }

Note that you can (and should) limit the roles that services can assume by specifying a resource prefix.

2. Find what role is used by the worker nodes in your cluster and attach the policy from step 1 to it.

3. For each role that you define and want pods to use, you also need to add a trust relationship that allows the worker nodes’ role to assume it (the whole flow is also sketched with the AWS CLI after the example below).

{
   "Version": "2012-10-17",
   "Statement": [
     {
       "Effect": "Allow",
       "Principal": {
         "Service": "ec2.amazonaws.com"
       },
       "Action": "sts:AssumeRole"
     },
     {
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
         "AWS": "arn:aws:iam::your-account-id:role/worker-node-role }, "Action": "sts:AssumeRole" } ] }

Installing kube2iam

The kube2iam site has instructions on installing it, but I found it was easier to install it using Helm:

helm install stable/kube2iam --name dev-kube2iam --namespace kube-system -f ./kube2iam.config.yaml

where the config file is:

aws:
  region: "your-aws-region"

extraArgs:
  auto-discover-base-arn: true
  auto-discover-default-role: true

# Won't work with Calico
host:
  iptables: true
  interface: eni+

rbac:
  create: true
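
Once the chart is installed, kube2iam runs as a DaemonSet, so there should be one kube2iam pod per worker node. A quick sanity check (the exact resource names depend on the chart and release name):

# One kube2iam pod should be running on every node
kubectl get daemonset --namespace kube-system
kubectl get pods --namespace kube-system -o wide | grep kube2iam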

Setting Roles for Helm Charts

Lastly, you need to annotate your deployments with iam.amazonaws.com/role:

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  containers:
  - image: fstab/aws-cli
    command:
    - "/home/aws/aws/env/bin/aws"
    - "s3"
    - "ls"
    - "some-bucket"
    name: aws-cli

Note that the role-arn here is only the suffix of the ARN, since kube2iam is configured with (or automatically discovers, as above) the ARN prefix.
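
To confirm that a pod really received the role from its annotation rather than the node’s role, you can ask AWS for the caller identity from inside the pod. The pod name below is a placeholder, and it assumes the AWS CLI is available in the container and the pod is still running:

# Should print an assumed-role ARN matching the annotation,
# not the worker node's role
kubectl exec -it my-annotated-pod -- aws sts get-caller-identity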

This works well if you’re deploying your services with kubectl. We are using Helm, though, so to set the annotation with Helm you need to put it in the chart’s pod template (you can render the chart locally to verify it, as shown after the template):

spec:
  replicas:
  template:
    metadata:
      labels:
        app:
      annotations:
        iam.amazonaws.com/role:
    spec:
      containers:
        - name:
          image: ":"
          imagePullPolicy:
          envFrom:
            - configMapRef:
                name: -configmap
          ports:
            - containerPort: 8000
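
To check that the annotation actually lands on the rendered pod spec, you can render the chart locally before deploying. The chart path and values file are placeholders for your own chart:

# Render the chart without installing it and look for the role annotation
helm template ./my-chart -f values.yaml | grep -B 2 -A 2 "iam.amazonaws.com/role"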
