How to set up MetalLB
“MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.”
Reference: https://metallb.universe.tf/
Why
Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. The network load-balancer implementations that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on one of those supported IaaS platforms, LoadBalancer services will remain in the “pending” state indefinitely when created.
Configure your local routing
You need to add routes on your local machine to access the internal networks of VirtualBox.
~$ sudo ip route add 192.168.4.0/27 via 192.168.4.30 dev vboxnet0
~$ sudo ip route add 192.168.4.32/27 via 192.168.4.62 dev vboxnet0
~$ sudo ip route add 192.168.2.0/24 via 192.168.4.254 dev vboxnet0
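To confirm the routes took effect, you can list them and probe the gateway of the service network. A quick sketch; the interface name vboxnet0 and the addresses follow this guide's network plan:

# List the routes bound to the VirtualBox host-only interface
~$ ip route show dev vboxnet0

# Check that the gateway to the 192.168.2.0/24 network answers
~$ ping -c 1 192.168.4.254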
Access the BusyBox
We need to get the BusyBox IP to access it via SSH.
~$ vboxmanage guestproperty get busybox "/VirtualBox/GuestInfo/Net/0/V4/IP"
Expected output:
Value: 192.168.4.57
Use the returned value to access the VM:
~$ ssh debian@192.168.4.57
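If you prefer a single step, the lookup and the login can be combined. A convenience sketch, reusing the same vboxmanage call as above:

# Extract the IP from the "Value: <IP>" output and ssh straight into it
~$ ssh debian@$(vboxmanage guestproperty get busybox "/VirtualBox/GuestInfo/Net/0/V4/IP" | awk '{ print $2 }')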
Install
kube-service-load-balancer
LoadBalancer manifest:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-service
  labels:
    app: guestbook
    tier: frontend
spec:
  selector:
    app: guestbook
    tier: frontend
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
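For a quick experiment, roughly the same service could also be created imperatively. A sketch, assuming a deployment named frontend already exists whose pod labels match app=guestbook,tier=frontend (the deployment name is illustrative only; kubectl expose copies the selector from the deployment):

debian@busybox:~$ kubectl expose deployment frontend --name=load-balancer-service --type=LoadBalancer --port=80 --target-port=80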
- Apply the LoadBalancer service from the kube-service-load-balancer file:

  debian@busybox:~$ kubectl apply -f https://raw.githubusercontent.com/mvallim/kubernetes-under-the-hood/master/services/kube-service-load-balancer.yaml

  The response should look similar to this:

  service/load-balancer-service created

- Query the state of the load-balancer-service service:

  debian@busybox:~$ kubectl get service load-balancer-service -o wide

  The response should look similar to this:

  NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
  load-balancer-service   LoadBalancer   10.103.128.109   <pending>     80:31969/TCP   7s    app=guestbook,tier=frontend
If you look at the EXTERNAL-IP column, it shows <pending> because we still need to configure MetalLB to provide an IP to the LoadBalancer service.
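To see more detail on why the address is pending, you can inspect the service; before MetalLB is installed there is simply no controller to allocate an external IP (the exact output varies by Kubernetes version):

debian@busybox:~$ kubectl describe service load-balancer-service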
Deploy
To install MetalLB, apply the manifest:

- Apply the manifest that creates the controller, the speaker and the namespace:

  debian@busybox:~$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml

  The response should look similar to this:

  namespace/metallb-system created
  customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
  customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
  customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
  customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
  customresourcedefinition.apiextensions.k8s.io/configurationstates.metallb.io created
  customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
  customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
  customresourcedefinition.apiextensions.k8s.io/servicebgpstatuses.metallb.io created
  customresourcedefinition.apiextensions.k8s.io/servicel2statuses.metallb.io created
  serviceaccount/controller created
  serviceaccount/speaker created
  role.rbac.authorization.k8s.io/controller created
  role.rbac.authorization.k8s.io/pod-lister created
  clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
  clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
  rolebinding.rbac.authorization.k8s.io/controller created
  rolebinding.rbac.authorization.k8s.io/pod-lister created
  clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
  clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
  configmap/metallb-excludel2 created
  secret/metallb-webhook-cert created
  service/metallb-webhook-service created
  deployment.apps/controller created
  daemonset.apps/speaker created
  validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
- Query the state of the deploy:

  debian@busybox:~$ kubectl get deploy -n metallb-system -o wide

  The response should look similar to this:

  NAME         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                               SELECTOR
  controller   1/1     1            1           72s   controller   quay.io/metallb/controller:v0.15.3   app=metallb,component=controller
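The speaker runs as a daemonset, so it is also worth confirming that one speaker pod is running on each node; a quick check:

debian@busybox:~$ kubectl get pods -n metallb-system -o wide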
This will deploy MetalLB to your cluster, under the metallb-system namespace. The components in the manifest are:

- The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
- The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
- Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.
The installation manifest does not include a configuration. MetalLB’s components will still start, but will remain idle until you define and deploy the configuration resources (see the Configure section below). The memberlist secret contains the secret key used to encrypt the communication between speakers for fast dead-node detection.
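The configuration is validated by an admission webhook served by the controller, so it can help to wait until the controller pod is ready before applying it. A sketch, using the selector shown in the deploy output above:

debian@busybox:~$ kubectl wait --namespace metallb-system --for=condition=ready pod --selector=app=metallb,component=controller --timeout=90s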
Reference: https://metallb.universe.tf/installation/#installation-by-manifest
Configure
Based on the planned network configuration (here), we will have a metallb-config.yaml as below:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool-addresses
  namespace: metallb-system
spec:
  addresses:
    - 192.168.2.2-192.168.2.125
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - pool-addresses
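As an aside, a service can request a specific address from this pool instead of taking the next free one. A minimal sketch, assuming the pool above is active; the service name load-balancer-service-fixed and the address 192.168.2.10 are illustrative only, and spec.loadBalancerIP is deprecated in recent Kubernetes releases but MetalLB still honors it:

debian@busybox:~$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-service-fixed  # hypothetical name, for illustration
spec:
  selector:
    app: guestbook
    tier: frontend
  type: LoadBalancer
  loadBalancerIP: 192.168.2.10  # must fall inside the pool defined above
  ports:
    - name: http
      port: 80
      targetPort: 80
EOF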
- Apply the MetalLB configuration from the metallb-config.yaml file:

  debian@busybox:~$ kubectl apply -f https://raw.githubusercontent.com/mvallim/kubernetes-under-the-hood/master/metallb/metallb-config.yaml
- Query the state of the load-balancer-service service:

  debian@busybox:~$ kubectl get service load-balancer-service -o wide

  The response should look similar to this:

  NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
  load-balancer-service   LoadBalancer   10.99.161.153   192.168.2.2   80:31364/TCP   6m22s   app=guestbook,tier=frontend
Now if you look at the EXTERNAL-IP column, it shows 192.168.2.2, and the service can be accessed directly from outside the cluster, without using a NodePort or ClusterIP. Remember: the IP 192.168.2.2 isn’t assigned to any node. In this example we can access the service using http://192.168.2.2.
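From your local machine (with the routes added at the beginning of this guide in place), a quick test could look like this; the -I flag only fetches the response headers:

~$ curl -I http://192.168.2.2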