
How to set up MetalLB

“MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.”

Reference: https://metallb.universe.tf/

Why

Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. The implementations of network load balancers that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform, LoadBalancer services will remain in the “pending” state indefinitely when created.

Configure your local routing

You need to add routes on your local machine to reach the VirtualBox internal networks.

~$ sudo ip route add 192.168.4.0/27 via 192.168.4.30 dev vboxnet0
~$ sudo ip route add 192.168.4.32/27 via 192.168.4.62 dev vboxnet0
~$ sudo ip route add 192.168.2.0/24 via 192.168.4.254 dev vboxnet0
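To confirm the routes are in place, you can list the routes attached to the vboxnet0 interface (the exact output depends on your VirtualBox host-only network setup):

~$ ip route show dev vboxnet0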

Access the BusyBox

We need to get the BusyBox IP address to access it via SSH:

~$ vboxmanage guestproperty get busybox "/VirtualBox/GuestInfo/Net/0/V4/IP"

Expected output:

Value: 192.168.4.57

Use the returned value to access the VM:

~$ ssh debian@192.168.4.57
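Before applying any manifests, it is worth checking that kubectl on the BusyBox VM can reach the cluster (this assumes kubectl was already configured there in the previous parts of this guide):

debian@busybox:~$ kubectl get nodes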

Install

kube-service-load-balancer

LoadBalancer manifest:

apiVersion: v1
kind: Service
metadata:  
  name: load-balancer-service
  labels:
    app: guestbook
    tier: frontend
spec:
  selector:
    app: guestbook
    tier: frontend
  type: LoadBalancer
  ports:  
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

  1. Apply the LoadBalancer service from the kube-service-load-balancer.yaml file:

    debian@busybox:~$ kubectl apply -f https://raw.githubusercontent.com/mvallim/kubernetes-under-the-hood/master/services/kube-service-load-balancer.yaml
    

    The response should look similar to this:

    service/load-balancer-service created
    
  2. Query the state of the load-balancer-service:

    debian@busybox:~$ kubectl get service load-balancer-service -o wide
    

    The response should look similar to this:

    NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
    load-balancer-service   LoadBalancer   10.103.128.109   <pending>     80:31969/TCP   7s    app=guestbook,tier=frontend
    

If you look at the EXTERNAL-IP column, the status is <pending> because we still need to configure MetalLB to provide an IP address to the LoadBalancer service.
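If you want more detail while the service is pending, you can describe it; once MetalLB is configured, the Events section is also where the IP assignment is reported:

debian@busybox:~$ kubectl describe service load-balancer-service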

Deploy

To install MetalLB, apply the manifest:

  1. Apply the manifest, which creates the namespace, the CRDs, the controller and the speaker:

    debian@busybox:~$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
    

    The response should look similar to this:

    namespace/metallb-system created
    customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
    customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
    customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
    customresourcedefinition.apiextensions.k8s.io/configurationstates.metallb.io created
    customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
    customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
    customresourcedefinition.apiextensions.k8s.io/servicebgpstatuses.metallb.io created
    customresourcedefinition.apiextensions.k8s.io/servicel2statuses.metallb.io created
    serviceaccount/controller created
    serviceaccount/speaker created
    role.rbac.authorization.k8s.io/controller created
    role.rbac.authorization.k8s.io/pod-lister created
    clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
    clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
    rolebinding.rbac.authorization.k8s.io/controller created
    rolebinding.rbac.authorization.k8s.io/pod-lister created
    clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
    clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
    configmap/metallb-excludel2 created
    secret/metallb-webhook-cert created
    service/metallb-webhook-service created
    deployment.apps/controller created
    daemonset.apps/speaker created
    validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
    
  2. Query the state of the deployment:

    debian@busybox:~$ kubectl get deploy -n metallb-system -o wide
    

    The response should look similar to this:

    NAME         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                               SELECTOR
    controller   1/1     1            1           72s   controller   quay.io/metallb/controller:v0.15.3   app=metallb,component=controller
    

This will deploy MetalLB to your cluster, under the metallb-system namespace. The main components in the manifest are: the metallb-system/controller deployment, the cluster-wide controller that handles IP address assignments; the metallb-system/speaker daemonset, the component that makes the services reachable from outside the cluster; and the service accounts for the controller and speaker, along with the RBAC permissions the components need to function.

The installation manifest does not include a configuration. MetalLB’s components will still start, but will remain idle until you define and apply the configuration resources (the IPAddressPool and L2Advertisement shown in the next section). The memberlist secret contains the secret key used to encrypt the communication between speakers for fast dead-node detection.
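You can confirm that both components are up before configuring MetalLB; the speaker runs as a daemonset, so expect one speaker pod per node:

debian@busybox:~$ kubectl get pods -n metallb-system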

Reference: https://metallb.universe.tf/installation/#installation-by-manifest

Configure

Based on the planned network configuration (here), we will have a metallb-config.yaml as below:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool-addresses
  namespace: metallb-system
spec:
  addresses:
    - 192.168.2.2-192.168.2.125 
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - pool-addresses 

  1. Apply the MetalLB configuration from the metallb-config.yaml file:

    debian@busybox:~$ kubectl apply -f https://raw.githubusercontent.com/mvallim/kubernetes-under-the-hood/master/metallb/metallb-config.yaml
    
  2. Query the state of the load-balancer-service again:

    debian@busybox:~$ kubectl get service load-balancer-service -o wide
    

    The response should look similar to this:

    NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
    load-balancer-service   LoadBalancer   10.99.161.153   192.168.2.2   80:31364/TCP   6m22s   app=guestbook,tier=frontend
    

Now if you look at the EXTERNAL-IP column, the status is 192.168.2.2, and the service can be accessed directly from outside the cluster, without using NodePort or ClusterIP. Remember, the IP 192.168.2.2 is not assigned to any node. In this example we can access the service at http://192.168.2.2.
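Because of the route to 192.168.2.0/24 added at the beginning of this guide, the service can be tested straight from your local machine (curl -I sends a HEAD request; the guestbook frontend should answer with an HTTP status line):

~$ curl -I http://192.168.2.2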