
VMware Fusion 12 – vctl / KinD / MetalLB / Nginx deployment


A number of months back, I wrote an article which looked at how we now provide a Kubernetes in Docker (KinD) service in VMware Fusion 12. In a nutshell, this allows us to stand up a Kubernetes environment very quickly, using the Nautilus Container Engine and a lightweight virtual machine (CRX) based on VMware Photon OS. In this post, I want to extend the experience and demonstrate how we can stand up a simple Nginx deployment. First, we will do a basic deployment. Then we will extend it to use a Load Balancer service (leveraging MetalLB).

This post will not cover how to launch the Container Engine or KinD with Fusion, since both are covered in the previous post. Instead, we will focus on deploying an Nginx web server. First, let’s look at a sample deployment and service for the Nginx application. Here is a simple manifest which describes two objects: a Deployment with two replicas (Pods), and a Service. The Deployment selects its Pods through spec.selector.matchLabels, while the Service selects the same Pods through its own spec.selector. There is a single container image, which presents its web service on port 80.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: my-nginx

Assuming that I have once again used VMware Fusion to launch the Container Engine and KinD, I can apply the above manifest with kubectl create or kubectl apply from my macOS terminal. Next, I will look at the objects that are created. I should see a deployment, two Pods, two endpoints and a service.

% kubectl apply -f nginx.yaml
deployment.apps/my-nginx created
service/my-nginx created


% kubectl get deploy my-nginx
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   2/2     2            2           25s


% kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
my-nginx-74b6849986-glqbs   1/1     Running   0          45s   10.244.0.13   kind-control-plane   <none>           <none>
my-nginx-74b6849986-r4vf4   1/1     Running   0          45s   10.244.0.14   kind-control-plane   <none>           <none>


% kubectl get endpoints my-nginx
NAME       ENDPOINTS                       AGE
my-nginx   10.244.0.13:80,10.244.0.14:80   37s


% kubectl get svc my-nginx
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
my-nginx   ClusterIP   10.97.25.54   <none>        80/TCP    47s

As we can see, the deployment and two Pods are up and running. What is interesting to observe is the networking configuration. The idea behind a deployment is that there can be multiple Pods to provide the service, in this case an Nginx web server. If one of the Pods fails, the other Pod continues to provide the functionality.

Each of the Pods gets its own IP address (e.g. 10.244.0.13, 10.244.0.14) from the Pod network range. These IP addresses are also assigned to the Endpoints, which can be referenced by the Service.
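
The Endpoints are maintained by matching the Service’s selector against Pod labels. The same label selector can be used directly with kubectl to list the Pods backing the Service, which should return the two Pods and IP addresses seen above:

% kubectl get pods -l app=my-nginx -o wide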

Similarly, the idea of creating a Service is to provide a “front-end” or “virtual” IP address from the Service network range (e.g. 10.97.25.54) to access the deployment. The Service gets its own unique IP address so that clients of the web server can avoid using the Pod IPs/Endpoints directly. If clients used the Pod IP addresses, they would lose connectivity to the application (e.g. the web server) if that Pod failed. If connectivity is made via the Service, there is no loss of connectivity when a Pod fails, as the Service redirects the connection to the other Pod IP address/Endpoint.
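
To see this self-healing in action, you could delete one of the Pods (using a Pod name from the output above) and watch the Deployment’s ReplicaSet create a replacement, with the Endpoints updated automatically:

% kubectl delete pod my-nginx-74b6849986-glqbs
% kubectl get pods,endpoints -w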

When a Service is created, it typically gets (1) a virtual IP address, (2) a DNS entry and (3) networking rules that ‘proxy’ or redirect the network traffic to the Pod/Endpoint that actually provides the service. When that virtual IP address receives traffic, the traffic is redirected to the correct back-end Pod/Endpoint.
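
To verify the DNS entry, you can resolve the Service name from a throwaway Pod inside the cluster (the Pod name and busybox image tag here are arbitrary choices). It should resolve to the ClusterIP seen earlier, 10.97.25.54:

% kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup my-nginx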

Let’s test the deployment, and see if we can verify that the web service is running. At present, there is no route from my Mac to either the Pod network (10.244.0.0) or the Service network (10.97.25.0). In order to reach them, I can add a static route to each, using the IP address of the KinD node as the gateway. You can get the KinD node IP address by simply running docker ps, as shown below:

% docker ps
────                 ─────                                                                                  ───────                   ──               ─────            ──────    ─────────────
NAME                 IMAGE                                                                                  COMMAND                   IP               PORTS            STATUS    CREATION TIME
────                 ─────                                                                                  ───────                   ──               ─────            ──────    ─────────────
kind-control-plane   kindest/node@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600   /usr/local/bin/entry...   172.16.255.128   54785:6443/tcp   running   2021-02-10T12:43:03Z

Now that the IP address of the KinD node has been identified, we can use it as a gateway when adding routes to the Pod network and the Service network. We can then test that the web server is running by using curl to retrieve the index.html landing page, as follows:

% sudo route add -net 10.244.0.0 -gateway 172.16.255.128
Password: *****
add net 10.244.0.0
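
Before testing with curl, you can confirm that macOS will now route Pod traffic via the KinD node by querying the routing table for one of the Pod IPs seen earlier:

% route -n get 10.244.0.13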


% curl 10.244.0.13:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


% curl 10.244.0.14:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

This looks good – we can get the Nginx web server landing page from both Pods. Let’s now check accessibility via the service. First, let’s remove the route to the Pods, and then add the route to the Service.

% sudo route delete -net 10.244.0.0
delete net 10.244.0.0


% sudo route add -net 10.97.25.0 -gateway 172.16.255.128
add net 10.97.25.0


% curl 10.97.25.54:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Excellent, everything appears to be working as expected. However, we would not normally allow external clients to access the ClusterIP directly as shown here. We would typically set up a Load Balancer service, which populates an EXTERNAL-IP. This is presently set to <none>, as per the service output seen earlier. We will configure the LoadBalancer using MetalLB. There are only a few steps needed: (1) deploy the MetalLB namespace manifest, (2) deploy the MetalLB objects manifest, and (3) create and deploy a ConfigMap with the range of Load Balancer / External IP addresses. Steps 1 and 2 are covered in the MetalLB Installation page. Step 3 is covered in the MetalLB Configuration page. Below are the steps taken from my KinD setup. Note that the range of Load Balancer IP addresses I chose is 192.168.1.1 to 192.168.1.250, as per the ConfigMap.

% kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
namespace/metallb-system created


% kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created


% cat config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.1-192.168.1.250


% kubectl apply -f config.yaml
configmap/config created
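
At this point, it is worth confirming that the MetalLB controller and speaker Pods came up cleanly before relying on them; they should all report a Running status:

% kubectl get pods -n metallb-system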

Now there is only a single change needed in my Nginx manifest, and that is to add spec.type: LoadBalancer to the Service, as shown in the second manifest below:

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  replicas: 2 
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: my-nginx

Let’s again query the objects that were created from this manifest, and we should see that the Service now has both a CLUSTER-IP and an EXTERNAL-IP populated. The EXTERNAL-IP should match the first IP address in the range provided in MetalLB’s ConfigMap, which it does (192.168.1.1):

% kubectl apply -f nginx.yaml
deployment.apps/my-nginx created
service/my-nginx created


% kubectl get deploy my-nginx
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   2/2     2            2           4s


% kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-74b6849986-z77ph   1/1     Running   0          10s
my-nginx-74b6849986-zlwdh   1/1     Running   0          10s


% kubectl get endpoints
NAME         ENDPOINTS                       AGE
kubernetes   172.16.255.128:6443             98m
my-nginx     10.244.0.18:80,10.244.0.19:80   16s


% kubectl get svc my-nginx
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-nginx   LoadBalancer   10.96.90.176   192.168.1.1   80:31374/TCP   27s
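
As a quick sanity check, kubectl describe on the Service shows how the address was assigned; MetalLB records an event on the Service when it allocates an IP (the exact event text varies by MetalLB version):

% kubectl describe svc my-nginx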

This is now the IP address that should be used by external clients to access the web service. However, as before, there is no route from my desktop to this network, so I need to add a static route, once again using the KinD node as the gateway.

% sudo route add -net 192.168.1.0 -gateway 172.16.255.128
add net 192.168.1.0

% curl 192.168.1.1:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
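
Finally, when you are done experimenting, the static route and the demo objects can be cleaned up again. A suggested teardown, mirroring the commands used above:

% sudo route delete -net 192.168.1.0
% kubectl delete -f nginx.yaml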

Everything is working as expected. Hopefully that has given you a good idea of how you can use KinD in VMware Fusion (and indeed VMware Workstation) to become familiar with Kubernetes.

