Kubernetes on vSphere 101 – Ingress


As I was researching content for the 101 series, I came across the concept of an Ingress. As I hadn’t come across it before, I wanted to do a little more research on what it actually did. In some ways, an Ingress achieves the same goal as a Load Balancer, in so far as it provides a means of allowing external traffic into your cluster. But the two are significantly different in how they do this. If we take the Load Balancer service type first, every service exposed via a Load Balancer needs its own unique external IP address. Ingress, on the other hand, is not a service. It behaves as a sort of entry point to your cluster, sitting in front of multiple services behind a single IP address. A request can then be ‘routed’ to the appropriate service, based on how the request is made. The most common example of where ingress is used is with web servers. For example, I may run an online store offering different services, e.g. search for an item, add an item to the basket, display the basket contents, etc. Depending on the URL, I can redirect the request to a different service at the back-end, all from the same web site/URL. So cormachogan.com/add-to-basket could be directed to the ‘add-to-basket’ service backed by one set of Pods, whilst cormachogan.com/search could be directed to a different service backed by a different set of Pods.

To summarize, how this differs from a Load Balancer service is that a Load Balancer distributes requests across back-end Pods of the same type offering a single service, consuming a unique external IP address per service, whereas an Ingress routes requests to one of several different back-end services (based on a URL path, for example). As mentioned, one typically comes across ingress when multiple services are exposed via the same IP address and all of those services use the same layer 7 protocol, which more often than not is HTTP.
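
To make the online store example a little more concrete, a path-routing Ingress for it might look something like the sketch below. The host and service names are purely illustrative (they don’t exist anywhere); a full working example is built later in this post.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: store
spec:
  rules:
  - host: cormachogan.com
    http:
      paths:
      - path: /add-to-basket
        backend:
          serviceName: add-to-basket   # hypothetical service backed by one set of Pods
          servicePort: 80
      - path: /search
        backend:
          serviceName: search          # hypothetical service backed by a different set of Pods
          servicePort: 80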

Be aware that an Ingress object does nothing by itself; it requires an ingress controller to operate. Having said that, the other thing my research introduced me to was Contour. Contour is an Ingress Controller which VMware acquired as part of the Heptio acquisition. It works by deploying Envoy, an open source edge and service proxy. What is neat about Contour is that it supports dynamic configuration updates. I thought it might be interesting to use Contour and Envoy to create my own ingress for something home grown in the lab, so as to demonstrate Ingress.

Deploy Contour

The roll-out of Contour is very straightforward. The team have created a single manifest/YAML file with all of the necessary object definitions included (see the first command below for the path). This creates a new contour namespace, creates the service and service account, the Custom Resource Definitions, and everything else that is required. Contour is rolled out as a Deployment with 2 replicas. Each replica Pod contains both an Envoy container and a Contour container. Let’s roll it out and take a look. FYI, I am deploying this on my PKS 1.3 environment, which has NSX-T for the CNI and Harbor for its image repository. There is a lot of output here, but it should give you an idea as to how Contour and Envoy all hang together.

$ kubectl apply -f https://j.hept.io/contour-deployment-rbac
namespace/heptio-contour created
serviceaccount/contour created
customresourcedefinition.apiextensions.k8s.io/ingressroutes.contour.heptio.com created
customresourcedefinition.apiextensions.k8s.io/tlscertificatedelegations.contour.heptio.com created
deployment.apps/contour created
clusterrolebinding.rbac.authorization.k8s.io/contour created
clusterrole.rbac.authorization.k8s.io/contour created
service/contour created

$ kubectl change-ns heptio-contour
namespace changed to "heptio-contour"
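
(By the way, change-ns is a small kubectl plugin I use to switch the namespace my kubectl commands run against. If you don’t have a plugin like that, something along these lines with standard kubectl should achieve the same result.)

$ kubectl config set-context --current --namespace=heptio-contour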

$ kubectl get svc
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP                 PORT(S)                      AGE
contour   LoadBalancer   10.100.200.184   100.64.0.1,192.168.191.70   80:31042/TCP,443:31497/TCP   35s

$ kubectl get crd
NAME                                           CREATED AT
clustersinks.apps.pivotal.io                   2019-06-25T11:35:17Z
ingressroutes.contour.heptio.com               2019-07-10T08:46:15Z
sinks.apps.pivotal.io                          2019-06-25T11:35:17Z
tlscertificatedelegations.contour.heptio.com   2019-07-10T08:46:15Z

$ kubectl get deploy
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
contour   2         2         2            0           56s

$ kubectl get clusterrole | grep contour
contour

$ kubectl describe clusterrole contour
Name:         contour
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRole","metadata":{"annotations":{},"name":"contour"},"rules":[{"apiGroups...
PolicyRule:
  Resources                                     Non-Resource URLs  Resource Names  Verbs
  ---------                                     -----------------  --------------  -----
  ingressroutes.contour.heptio.com              []                 []              [get list watch put post patch]
  tlscertificatedelegations.contour.heptio.com  []                 []              [get list watch put post patch]
  services                                      []                 []              [get list watch]
  ingresses.extensions                          []                 []              [get list watch]
  nodes                                         []                 []              [list watch get]
  configmaps                                    []                 []              [list watch]
  endpoints                                     []                 []              [list watch]
  pods                                          []                 []              [list watch]
  secrets                                       []                 []              [list watch]

$ kubectl describe clusterrolebinding contour
Name:         contour
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"contour"},"roleRef":{"a...
Role:
  Kind:  ClusterRole
  Name:  contour
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  contour  heptio-contour

$ kubectl get replicasets
NAME                 DESIRED   CURRENT   READY   AGE
contour-5cd6986479   2         2         0       4m48s

$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
contour-864d797fc6-t8tb9   2/2     Running   0          32s
contour-864d797fc6-z8x4m   2/2     Running   0          32s

$ kubectl describe pod contour-864d797fc6-t8tb9
Name:               contour-864d797fc6-t8tb9
Namespace:          heptio-contour
Priority:           0
PriorityClassName:  <none>
Node:               6ac7f51f-af3f-4b55-8f47-6449a8a7c365/192.168.192.5
Start Time:         Wed, 10 Jul 2019 10:02:40 +0100
Labels:             app=contour
                    pod-template-hash=864d797fc6
Annotations:        prometheus.io/path: /stats/prometheus
                    prometheus.io/port: 8002
                    prometheus.io/scrape: true
Status:             Running
IP:                 172.16.7.2
Controlled By:      ReplicaSet/contour-864d797fc6
Init Containers:
  envoy-initconfig:
    Container ID:  docker://ecc58b57d4ae0329368729d5ae5ae76ac143809090c659256ceceb74c192d2e9
    Image:         harbor.rainpole.com/library/contour:master
    Image ID:      docker-pullable://harbor.rainpole.com/library/contour@sha256:b3c8a2028b9224ad1e418fd6dd70a68ffa62ab98f67b8d0754f12686a9253e2a
    Port:          <none>
    Host Port:     <none>
    Command:
      contour
    Args:
      bootstrap
      /config/contour.json
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 10 Jul 2019 10:02:43 +0100
      Finished:     Wed, 10 Jul 2019 10:02:44 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      CONTOUR_NAMESPACE:  heptio-contour (v1:metadata.namespace)
    Mounts:
      /config from contour-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from contour-token-jctk2 (ro)
Containers:
  contour:
    Container ID:  docker://b2c3764379b26221670ce5953b3cd2e11c90eb80bdd04d31d422f98ed3d4486d
    Image:         harbor.rainpole.com/library/contour:master
    Image ID:      docker-pullable://harbor.rainpole.com/library/contour@sha256:b3c8a2028b9224ad1e418fd6dd70a68ffa62ab98f67b8d0754f12686a9253e2a
    Port:          <none>
    Host Port:     <none>
    Command:
      contour
    Args:
      serve
      --incluster
      --envoy-service-http-port
      8080
      --envoy-service-https-port
      8443
    State:          Running
      Started:      Wed, 10 Jul 2019 10:02:45 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8000/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8000/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from contour-token-jctk2 (ro)
  envoy:
    Container ID:  docker://3a7859992c88d29ba4b9a347f817919a302b50e5352e988ca934caad3e0ea934
    Image:         harbor.rainpole.com/library/envoy:v1.10.0
    Image ID:      docker-pullable://harbor.rainpole.com/library/envoy@sha256:bf7970f469c3d2cd54a472536342bd50df0ddf099ebd51024b7f13016c4ee3c4
    Ports:         8080/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      envoy
    Args:
      --config-path /config/contour.json
      --service-cluster cluster0
      --service-node node0
      --log-level info
    State:          Running
      Started:      Wed, 10 Jul 2019 10:02:52 +0100
    Ready:          True
    Restart Count:  0
    Readiness:      http-get http://:8002/healthz delay=3s timeout=1s period=3s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /config from contour-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from contour-token-jctk2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  contour-config:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  contour-token-jctk2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  contour-token-jctk2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                           Message
  ----    ------     ----  ----                                           -------
  Normal  Scheduled  43s   default-scheduler                              Successfully assigned heptio-contour/contour-864d797fc6-t8tb9 to 6ac7f51f-af3f-4b55-8f47-6449a8a7c365
  Normal  Pulling    42s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  pulling image "harbor.rainpole.com/library/contour:master"
  Normal  Pulled     41s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Successfully pulled image "harbor.rainpole.com/library/contour:master"
  Normal  Created    41s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Created container
  Normal  Started    41s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Started container
  Normal  Pulling    40s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  pulling image "harbor.rainpole.com/library/contour:master"
  Normal  Created    40s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Created container
  Normal  Pulled     40s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Successfully pulled image "harbor.rainpole.com/library/contour:master"
  Normal  Started    39s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Started container
  Normal  Pulling    39s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  pulling image "harbor.rainpole.com/library/envoy:v1.10.0"
  Normal  Pulled     33s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Successfully pulled image "harbor.rainpole.com/library/envoy:v1.10.0"
  Normal  Created    32s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Created container
  Normal  Started    32s   kubelet, 6ac7f51f-af3f-4b55-8f47-6449a8a7c365  Started container

The output from the other Pod is pretty much identical to this one. Now we need to figure out an application (or applications) that can sit behind this ingress. I’m thinking of using some simple Nginx web server deployments, whereby on receipt of a request to access /index-a.html, I am redirected to the ‘a’ service and hit the index page on Pod ‘a’. Similarly, on receipt of a request to access /index-b.html, I am redirected to the ‘b’ service and hit the index page on Pod ‘b’. For this, I am going to build some new docker images for my Pod containers, so we can easily tell which service/Pod we are landing on, a or b.

Create some bespoke Nginx images

I mentioned already that I am using Harbor for my registry. What I will show in this section is how to pull down an Nginx image, modify it, then commit those changes and store the updated image in Harbor. The end goal here is that when I connect to a particular path on the web server, I want to see which server I am landing on, either A or B. First, I will copy the index.html contents to something that identifies it as A or B, renamed to something that we can reference in the ingress manifest later on, either index-a.html or index-b.html, depending on which Pod it is deployed to. If you are not using Harbor, you can just keep the changed images locally and reference them from there in your manifests.

$ sudo docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
harbor.rainpole.com/library/contour       master              adc3f7fbe3b4        30 hours ago        41.9MB
nginx                                     latest              f68d6e55e065        9 days ago          109MB
harbor.rainpole.com/library/mysql         5.7                 a1aa4f76fab9        4 weeks ago         373MB
harbor.rainpole.com/library/mysql         latest              c7109f74d339        4 weeks ago         443MB
harbor.rainpole.com/library/envoy         v1.10.0             20b550751ccf        3 months ago        164MB
harbor.rainpole.com/library/kuard-amd64   1                   81086a8c218b        5 months ago        19.7MB
harbor.rainpole.com/library/cadvisor      v0.31.0             a38f1319a420        10 months ago       73.8MB
harbor.rainpole.com/library/cassandra     v11                 11aad67b47d9        2 years ago         384MB
harbor.rainpole.com/library/xtrabackup    1.0                 c415dbd7af07        2 years ago         265MB
harbor.rainpole.com/library/volume-nfs    0.8                 ab7049d62a53        2 years ago         247MB
harbor.rainpole.com/library/busybox       latest              e7d168d7db45        4 years ago         2.43MB

$ sudo docker create nginx
18b1a4cea9568972241bda78d711dd71cb4afe1ea8be849b826f8c232429040e

$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
18b1a4cea956        nginx               "nginx -g 'daemon of…"   10 seconds ago      Created                                 focused_swanson

$ sudo docker start 18b1a4cea956
18b1a4cea956

$ sudo docker exec -it 18b1a4cea956 bash
root@18b1a4cea956:/# cd /usr/share/nginx/html/
root@18b1a4cea956:/usr/share/nginx/html# cp index.html orig-index.html
root@18b1a4cea956:/usr/share/nginx/html# grep Welcome index.html
<title>Welcome to nginx!</title>
<h1>Welcome to nginx!</h1>

root@18b1a4cea956:/usr/share/nginx/html# sed 's/Welcome to nginx/Welcome to nginx - redirected to A/' orig-index.html > index-a.html

root@18b1a4cea956:/usr/share/nginx/html# grep Welcome index-a.html
<title>Welcome to nginx - redirected to A!</title>
<h1>Welcome to nginx - redirected to A!</h1>

root@18b1a4cea956:/usr/share/nginx/html# rm orig-index.html
root@18b1a4cea956:/usr/share/nginx/html# exit
exit
$

Now that the changes are made, let’s commit them to a new image and push it out to Harbor.

$ sudo docker commit 18b1a4cea956 nginx-a
sha256:6c15ef2087abd7065ce79ca703c7b902ac8ca4a2235d660b58ed51688b7b0164

$ sudo docker tag nginx-a:latest harbor.rainpole.com/library/nginx-a:latest
$ sudo docker push  harbor.rainpole.com/library/nginx-a:latest
The push refers to repository [harbor.rainpole.com/library/nginx-a]
dcce4746f5e6: Pushed
d2f0b6dea592: Layer already exists
197c666de9dd: Layer already exists
cf5b3c6798f7: Layer already exists
latest: digest: sha256:a9ade6ea857b991d34713f1b6c72fd4d75ef1f53dec4eea94e8f61adb5192284 size: 1155
$

Now we need to repeat the process for the other image. We can reuse the same running container as before, make new changes, and commit them as a second image.

$ sudo docker exec -it 18b1a4cea956 bash
root@18b1a4cea956:/# cd /usr/share/nginx/html/
root@18b1a4cea956:/usr/share/nginx/html# ls
50x.html  index-a.html
root@18b1a4cea956:/usr/share/nginx/html# grep Welcome index-a.html
<title>Welcome to nginx - redirected to A!</title>
<h1>Welcome to nginx - redirected to A!</h1>

root@18b1a4cea956:/usr/share/nginx/html# sed s'/redirected to A/redirected to B/' index-a.html > index-b.html

root@18b1a4cea956:/usr/share/nginx/html# grep Welcome index-b.html
<title>Welcome to nginx - redirected to B!</title>
<h1>Welcome to nginx - redirected to B!</h1>

root@18b1a4cea956:/usr/share/nginx/html# exit
exit

$ sudo docker commit 18b1a4cea956 nginx-b
sha256:29dc4c9bf09c18989781eb56efdbf15f5f44584a7010477a4bd58f5acaf468bd

$ sudo docker tag nginx-b:latest harbor.rainpole.com/library/nginx-b:latest
$ sudo docker push  harbor.rainpole.com/library/nginx-b:latest
The push refers to repository [harbor.rainpole.com/library/nginx-b]
6d67f88e5fa0: Pushed
d2f0b6dea592: Layer already exists
197c666de9dd: Layer already exists
cf5b3c6798f7: Layer already exists
latest: digest: sha256:11e4af3647abeb23358c92037a4c3287bf7b0b231fcd6f06437d75c9367d793f size: 1155
$
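
As an aside, the same images could of course be built with a small Dockerfile rather than by committing a running container. A minimal sketch for image A, assuming the edited index-a.html has been saved alongside the Dockerfile, might look like this:

FROM nginx:latest
# Add the pre-edited index page that identifies this as server A
COPY index-a.html /usr/share/nginx/html/index-a.html

$ sudo docker build -t harbor.rainpole.com/library/nginx-a:latest .
$ sudo docker push harbor.rainpole.com/library/nginx-a:latest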

Deploy our ingress based application

Excellent. At this point, we now have two different images – one that we can use when a request is received for service A, and the other that we use when a request is received for service B. Let’s now take a look at the manifest files for the nginx application. The Deployments and the Services should be very straightforward to understand at this point. You can review the 101 Deployments post and the 101 Services post if you need more details. We will talk about the ingress manifest in more detail though. Let’s begin with the manifests for the Deployments. As you can see, these are deployed initially with a single replica but can be scaled out if needed (a quick scale example is included after the manifests). Note also that the image is nginx-a for deployment ‘a’ and nginx-b for deployment ‘b’.

$ cat nginx-a-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-a-deployment
spec:
  selector:
    matchLabels:
      app: nginx-a
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-a
    spec:
      containers:
      - name: nginx-a
        image: harbor.rainpole.com/library/nginx-a:latest
        ports:
        - containerPort: 80
$ cat nginx-b-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-b-deployment
spec:
  selector:
    matchLabels:
      app: nginx-b
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-b
    spec:
      containers:
      - name: nginx-b
        image: harbor.rainpole.com/library/nginx-b:latest
        ports:
        - containerPort: 80
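
If you later want to scale either deployment out, something like the following should do it (3 replicas is just an example):

$ kubectl scale deployment nginx-a-deployment --replicas=3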

Next up are the services, one for each deployment. Once again, these are quite straightforward, but as they are of type ClusterIP, there is no external access. Each one is simply tied to its deployment via the label selector.

$ cat nginx-a-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-a
  name: nginx-a
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx-a
  sessionAffinity: None
  type: ClusterIP
$ cat nginx-b-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-b
  name: nginx-b
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx-b
  sessionAffinity: None
  type: ClusterIP

This brings us to the ingress manifest. Let’s take a look at that next.

$ cat nginx-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.rainpole.com
    http:
      paths:
      - path: /index-a.html # This page must exist on the A server
        backend:
          serviceName: nginx-a
          servicePort: 80
      - path: /index-b.html # This page must exist on the B server
        backend:
          serviceName: nginx-b
          servicePort: 80

The rules section of the manifest looks at the path and directs the request to the appropriate service. The idea here is that an end user connects to nginx.rainpole.com (the DNS name of the IP address that will be provided for the Ingress), and depending on whether the full URL is nginx.rainpole.com/index-a.html or nginx.rainpole.com/index-b.html, the request is routed to the appropriate service, and thus to the Pod/container/application behind it. I should then see the correct (A or B) index page that we modified in our bespoke images earlier. Any other path should not return anything useful, since no rule matches it. An easy way to visualize the setup is with a rough sketch like this:
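
    client --> http://nginx.rainpole.com (single Ingress IP, Envoy/Contour)
                 path /index-a.html --> service nginx-a --> Pod running the nginx-a image
                 path /index-b.html --> service nginx-b --> Pod running the nginx-b image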

Once the application has been deployed, you should see the Pods, services and Ingress created, similar to the following (the output below includes the Contour objects as well):

$ kubectl get po,svc,ing
NAME                                      READY   STATUS    RESTARTS   AGE
pod/contour-864d797fc6-nqggp              2/2     Running   0          25h
pod/contour-864d797fc6-wk6vx              2/2     Running   0          25h
pod/nginx-a-deployment-6f6c8df5d6-k8lkz   1/1     Running   0          39m
pod/nginx-b-deployment-c66476f97-wl84p    1/1     Running   0          39m

NAME              TYPE           CLUSTER-IP       EXTERNAL-IP                 PORT(S)                      AGE
service/contour   LoadBalancer   10.100.200.227   100.64.0.1,192.168.191.71   80:31910/TCP,443:31527/TCP   29h
service/nginx-a   ClusterIP      10.100.200.30    <none>                      80/TCP                       39m
service/nginx-b   ClusterIP      10.100.200.195   <none>                      80/TCP                       39m

NAME                       HOSTS                ADDRESS                     PORTS   AGE
ingress.extensions/nginx   nginx.rainpole.com   100.64.0.1,192.168.191.60   80      39m

And now if I try to connect to the DNS name of the application (nginx.rainpole.com), let’s see what I get. First, let’s try to go to the main index.html. Typically we would see the default Nginx landing page, but since no ingress rule matches that path, the request is not routed to either service and nothing useful is returned.

But if we choose either /index-a.html or /index-b.html, we quickly see that we are directed to the different services and applications at the back-end, each returning its own modified welcome page.
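
If you prefer the command line to a browser, curl is a quick way to verify the routing (this assumes nginx.rainpole.com resolves to the Ingress address; curl’s --resolve option can be used to map the hostname to the IP if it doesn’t):

$ curl -s http://nginx.rainpole.com/index-a.html | grep Welcome    # should show the 'redirected to A' title
$ curl -s http://nginx.rainpole.com/index-b.html | grep Welcome    # should show the 'redirected to B' title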

Very good. And that is essentially it. Hopefully you can see the benefits of using this approach over a Load Balancer, especially the reduced number of IP addresses that need to be assigned. I was able to host both services behind a single IP address and direct traffic to the correct service based on the URL path. Otherwise I would have had to use two IP addresses, one for each service.

Now, I have only scratched the surface of what Contour + Envoy can do. There are obviously a lot more features than the simple example I have used in this post. You can read more about Contour over on its GitHub repo.

Manifests used in this demo can be found on my vsphere-storage-101 GitHub repo.
