04 April 2019

Introduction

This post explains how to deploy a Kubernetes cluster on AWS. We want to automatically update Route 53 so we can use our own domain, and to use an AWS ELB for load balancing to our pods. We'll also use AWS Certificate Manager (ACM), so our pods expose plain HTTP endpoints internally while externally they are served over HTTPS with a proper certificate.

Installation

Install awscli and kops.
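A quick way to get both tools (assuming macOS with Homebrew; on Linux, install awscli with pip and download the kops binary from its GitHub releases page):

# Install the AWS CLI and kops via Homebrew
brew install awscli kops

# Verify both tools are available
aws --version
kops version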

export bucket_name=test-kops
export KOPS_CLUSTER_NAME=k8s.test.net
export KOPS_STATE_STORE=s3://${bucket_name}

aws s3api create-bucket --bucket ${bucket_name} --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api put-bucket-versioning --bucket ${bucket_name} --versioning-configuration Status=Enabled
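An optional sanity check that the state bucket exists and has versioning enabled:

# Should print "Status": "Enabled"
aws s3api get-bucket-versioning --bucket ${bucket_name}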

kops create cluster \
--node-count=1 \
--node-size=t2.medium \
--zones=eu-west-1a \
--dns-zone test.net \
--cloud-labels="Department=TEST" \
--name=${KOPS_CLUSTER_NAME}

kops edit cluster --name ${KOPS_CLUSTER_NAME}

Add the following at the end, under spec:

  additionalPolicies:
     node: |
       [
           {
               "Effect": "Allow",
               "Action": "route53:ChangeResourceRecordSets",
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": "route53:ListHostedZones",
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": "route53:ListResourceRecordSets",
               "Resource": "*"
           }
       ]

and create the cluster by executing:

kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
kops rolling-update cluster

It takes some time. Use kops validate cluster to check that everything is up and running.
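For example, once the rolling update finishes you can confirm the cluster is healthy with:

# Validation should report the master and node as ready
kops validate cluster --name ${KOPS_CLUSTER_NAME}
kubectl get nodes
kubectl cluster-info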

Deploy the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy &
kops get secrets kube --type secret -oplaintext

Open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Click on Token and enter the output of:

kops get secrets admin --type secret -oplaintext

Configure DNS

Note: avoid route53-mapper; it's deprecated, and the kops documentation around it is outdated.

Obtain the zone ID of your hosted zone (if you don't have one yet, create it first; the AWS documentation explains how):

aws route53 list-hosted-zones-by-name --output json --dns-name "test.net." | jq -r '.HostedZones[0].Id'

In our case, it returns /hostedzone/AAAAAA.
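If you want to reuse it later, you can capture just the ID in a variable (ZONE_ID is our own name for it):

# Strip the /hostedzone/ prefix and keep only the ID
export ZONE_ID=$(aws route53 list-hosted-zones-by-name --output json --dns-name "test.net." | jq -r '.HostedZones[0].Id' | cut -d'/' -f3)
echo ${ZONE_ID}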

Create a new file external-dns.yml and replace the values at the end (domain filter and TXT owner ID) with your own:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"] 
  resources: ["ingresses"] 
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=test.net
        - --provider=aws
        # - --policy=upsert-only # would prevent ExternalDNS from deleting any records; omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id=AAAAAA

and deploy it:

kubectl apply -f external-dns.yml
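To confirm it's running and watching your zone, check the pod and its logs (we deployed it in the default namespace):

kubectl get pods -l app=external-dns
# The logs show the records it creates or updates
kubectl logs -f deployment/external-dns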

Test your configuration with an example:

Create an AWS certificate for the service:

aws acm request-certificate \
--domain-name nginx.test.net \
--validation-method DNS \
--idempotency-token 1234 

and save the CertificateArn. We'll use it later.

You will need to validate it. The easiest way is from the AWS web console, as explained in the official documentation.
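If you prefer to stay on the command line, here is a sketch of the same DNS validation (assuming CERT_ARN holds the CertificateArn from the previous step and ZONE_ID your hosted zone ID):

# Print the CNAME record ACM expects (Name, Type, Value)
aws acm describe-certificate --certificate-arn ${CERT_ARN} \
  --query 'Certificate.DomainValidationOptions[0].ResourceRecord'

# Create that record in Route 53, substituting <name> and <value> from the output above
aws route53 change-resource-record-sets --hosted-zone-id ${ZONE_ID} \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"<name>","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"<value>"}]}}]}'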

Create nginx-d.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: http

and nginx-svc.yml with the domain you would like to use and your ACM certificate ARN:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.test.net.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:888888:certificate/AAAAAA-BBBBB-CCCCC-DDDDD
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    targetPort: 80
  - name: https
    port: 443
    targetPort: http
  selector:
    app: nginx

and deploy them:

kubectl apply -f nginx-d.yml -f nginx-svc.yml

It will take a few minutes. Once the pods are ready and the DNS record has propagated, you should be able to open http://nginx.test.net and https://nginx.test.net in your browser and see the nginx welcome page.
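You can also follow progress from the command line (the ELB hostname appears under EXTERNAL-IP, and the Route 53 record should resolve to it once external-dns has done its job):

# Service and the ELB created for it
kubectl get svc nginx

# DNS record created by external-dns
dig +short nginx.test.net

# HTTPS endpoint terminated at the ELB with the ACM certificate
curl -I https://nginx.test.net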

Clean everything

Delete the ACM certificate and execute:

kops delete cluster --name k8s.test.net --yes
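If you created the S3 state bucket just for this test, you can remove it as well. Note that, because versioning is enabled, you may have to delete old object versions before the bucket can be removed:

# Removes the bucket and its current objects (old versions may need separate cleanup)
aws s3 rb s3://${bucket_name} --force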

Resources