A general purpose cloud provider for Kube-Vip

Overview

kube-vip-cloud-provider

The kube-vip cloud provider is a general-purpose cloud provider for on-prem, bare-metal, or virtualised environments. It is designed to work with the kube-vip project; however, any load-balancer solution that follows the Kubernetes conventions can advertise the IP addresses that this cloud provider assigns.

Architecture

The kube-vip-cloud-provider implements only the loadBalancer functionality of the out-of-tree cloud-provider interface. The design goal is to be completely decoupled from any technology other than the Kubernetes API; the only contract is between the kube-vip-cloud-provider and the Kubernetes Service schema. The cloud provider won't generate configuration in any other format: its sole purpose is to ensure that a new service of type: LoadBalancer is assigned an address from an address pool. It does this by updating <service>.spec.loadBalancerIP with an address from its IPAM; the responsibility for advertising that address and updating <service>.status.loadBalancer.ingress.ip is left to the actual load balancer, such as kube-vip.io.
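
To make the division of responsibilities concrete, here is a hedged sketch of that flow (the service name, namespace, and addresses are illustrative):

# Submitted by the user; no external address yet:
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80

# 1. kube-vip-cloud-provider fills in an address from its IPAM:
#      spec.loadBalancerIP: 192.168.0.201
# 2. The load balancer (e.g. kube-vip) advertises the address and updates:
#      status.loadBalancer.ingress[0].ip: 192.168.0.201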

IP address functionality

  • IP address pools by CIDR
  • IP ranges [start address - end address]
  • Multiple pools by CIDR per namespace
  • Multiple IP ranges per namespace (handles overlapping ranges)
  • Setting of static addresses through --load-balancer-ip=x.x.x.x (see the example after this list)
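
As an example of the last item, a static address could be requested at expose time; a minimal sketch, assuming a deployment named nginx already exists (the address is illustrative):

kubectl expose deployment nginx --port=80 --type=LoadBalancer --load-balancer-ip=192.168.0.205

The same result can be achieved declaratively by setting spec.loadBalancerIP in the Service manifest.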

Installing the kube-vip-cloud-provider

We can apply the controller manifest directly from this repository to get the latest release:

$ kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml

The controller is deployed as a StatefulSet, and its pod can always be viewed with the following command:

kubectl describe pods -n kube-system kube-vip-cloud-provider-0

Global and namespace pools

Global pool

Any service in any namespace will take an address from the global pool, defined by the cidr-global or range-global key.

Namespace pool

A service will take an address from the pool that matches its namespace, defined by a cidr-<namespace> or range-<namespace> key. Such a configuration would look like the following:

$ kubectl get configmap -n kube-system kubevip -o yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  cidr-default: 192.168.0.200/29
  cidr-development: 192.168.0.210/29
  cidr-finance: 192.168.0.220/29
  cidr-testing: 192.168.0.230/29
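
With the configuration above, a service created in the development namespace would receive an address from 192.168.0.210/29, while a service in a namespace without its own pool would fall back to the global pool (if one exists). A hedged sketch, with an illustrative service:

apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: development
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80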

Create an IP pool using a CIDR

kubectl create configmap --namespace kube-system kubevip --from-literal cidr-global=192.168.0.220/29

Create an IP range

kubectl create configmap --namespace kube-system kubevip --from-literal range-global=192.168.0.200-192.168.0.202

Multiple pools or ranges

We can configure multiple pools or ranges by separating them with commas, e.g. 192.168.0.200/30,192.168.0.200/29 or 192.168.0.10-192.168.0.11,192.168.0.10-192.168.0.13.
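
For instance, a global pool built from two ranges could be created in one step; a minimal sketch (the addresses are illustrative):

kubectl create configmap --namespace kube-system kubevip --from-literal range-global=192.168.0.10-192.168.0.11,192.168.0.13-192.168.0.14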

Debugging

The logs for the cloud-provider controller can be viewed with the following command:

kubectl logs -n kube-system kube-vip-cloud-provider-0 -f

Issues
  • Multiple Services Use IP

    I followed the instructions for kube-vip, including those for kube-vip-cloud-provider, and then followed the instructions for Rancher.

    I used 192.168.99.100 in my ARP manifest, and I used 192.168.99.100-192.168.99.199 in my ConfigMap.

    I found that after I deployed a workload (nginx) with service type LoadBalancer, it used the same IP as the Traefik included with k3s. This continued even after I redeployed the workload, deleted and deployed the workload again, removed the loadBalancerIP, specified a different loadBalancerIP, and finally deleted and deployed a different workload entirely (alpine):

    $ kubectl get services --all-namespaces
    NAMESPACE             NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
    cattle-fleet-system   gitjob                 ClusterIP      10.43.64.93     <none>           80/TCP                       68m
    cattle-system         rancher                ClusterIP      10.43.148.21    <none>           80/TCP,443/TCP               73m
    cattle-system         rancher-webhook        ClusterIP      10.43.220.243   <none>           443/TCP                      67m
    cattle-system         webhook-service        ClusterIP      10.43.252.16    <none>           443/TCP                      67m
    cert-manager          cert-manager           ClusterIP      10.43.143.203   <none>           9402/TCP                     74m
    cert-manager          cert-manager-webhook   ClusterIP      10.43.52.73     <none>           443/TCP                      74m
    default               alpine                 ClusterIP      10.43.23.22     <none>           80/TCP                       17m
    default               alpine-loadbalancer    LoadBalancer   10.43.101.2     192.168.99.111   80:31615/TCP                 17m
    default               httpd                  ClusterIP      10.43.71.242    <none>           80/TCP                       43m
    default               httpd-loadbalancer     LoadBalancer   10.43.156.159   192.168.99.112   80:30906/TCP                 43m
    default               kubernetes             ClusterIP      10.43.0.1       <none>           443/TCP                      88m
    kube-system           kube-dns               ClusterIP      10.43.0.10      <none>           53/UDP,53/TCP,9153/TCP       88m
    kube-system           metrics-server         ClusterIP      10.43.222.119   <none>           443/TCP                      88m
    kube-system           traefik                LoadBalancer   10.43.31.248    192.168.99.111   80:31948/TCP,443:30174/TCP   88m
    

    Deploying another workload (httpd) while this one was running resulted in that workload getting the next IP in the range.

    kube-vip-cloud-provider appears to understand that this configuration is not valid:

    $ kubectl logs -n kube-system kube-vip-cloud-provider-0
    [...]
    I0410 23:32:57.719659       1 event.go:291] "Event occurred" object="kube-system/traefik" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    I0410 23:32:57.719692       1 event.go:291] "Event occurred" object="default/alpine-loadbalancer" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I0410 23:32:57.719766       1 event.go:291] "Event occurred" object="default/alpine-loadbalancer" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
    I0410 23:32:57.719774       1 event.go:291] "Event occurred" object="default/alpine-loadbalancer" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    I0410 23:32:57.719784       1 event.go:291] "Event occurred" object="default/httpd-loadbalancer" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I0410 23:32:57.719790       1 event.go:291] "Event occurred" object="default/httpd-loadbalancer" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
    I0410 23:32:57.719798       1 event.go:291] "Event occurred" object="default/httpd-loadbalancer" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    I0410 23:32:57.719808       1 event.go:291] "Event occurred" object="kube-system/traefik" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I0410 23:32:57.719813       1 event.go:291] "Event occurred" object="kube-system/traefik" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
    

    192.168.99.111 appears to be pointing to Traefik, based on the HTTP response I get when visiting that address, not alpine-loadbalancer.

    I believe kube-vip-cloud-provider should be assigning the next available IP to alpine-loadbalancer, not 192.168.99.111.

    Would anyone be willing to help me understand this behavior?

    opened by qskwood 15
  • Issue with vip assignment using cidr-global

    I noticed a couple of issues when creating a LoadBalancer service using kube-vip. Could you please take a look at them? Thank you.

    Issue number 1: cannot change the VIP for a service that did not have a hardcoded loadBalancerIP.

    My setup:

    kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
    KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
    alias kube-vip="docker run --network host --rm ghcr.io/kube-vip/kube-vip:$KVVERSION"
    kube-vip manifest daemonset --services --inCluster --arp --interface eth0 | kubectl apply -f -
    kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml
    echo $KVVERSION
    v0.4.0
    

    The deployment and service:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.14.2
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      type: LoadBalancer
      # not present initially
      #loadBalancerIP: XXX.YY.77.148
      selector:
        app: nginx
      ports:
          # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
        - port: 80
          targetPort: 80
          # Optional field
          # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
          nodePort: 30080
          protocol: TCP
    

    The kubevip configmap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevip
      namespace: kube-system
    data:
      cidr-global: XXX.YY.77.148/32,XXX.YY.77.149/32
    

    Initial setup logs for the above loadbalancer service. Log from kube-vip-cloud-provider-0

    I1117 01:43:38.982374       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I1117 01:43:39.133704       1 loadBalancer.go:149] syncing service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5)
    I1117 01:43:39.134248       1 loadBalancer.go:229] No cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
    I1117 01:43:39.134263       1 loadBalancer.go:234] Taking address from [cidr-global] pool
    I1117 01:43:39.134321       1 loadBalancer.go:190] Updating service [nginx-service], with load balancer IPAM address [XXX.YY.77.148]
    E1117 01:43:39.201644       1 controller.go:275] error processing service default/nginx-service (will retry): failed to ensure load balancer: Error updating Service Spec [nginx-service] : Operation cannot be fulfilled on services "nginx-service": the object has been modified; please apply your changes to the latest version and try again
    I1117 01:43:39.201788       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Error updating Service Spec [nginx-service] : Operation cannot be fulfilled on services \"nginx-service\": the object has been modified; please apply your changes to the latest version and try again"
    I1117 01:43:44.204320       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I1117 01:43:44.271326       1 loadBalancer.go:149] syncing service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5)
    I1117 01:43:44.271372       1 loadBalancer.go:229] No cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
    I1117 01:43:44.271387       1 loadBalancer.go:234] Taking address from [cidr-global] pool
    I1117 01:43:44.271401       1 loadBalancer.go:190] Updating service [nginx-service], with load balancer IPAM address [XXX.YY.77.148]
    I1117 01:43:44.428180       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    

    Log from kube-vip-ds

    [kube-vip-ds-vhkxp] time="2021-11-17T01:43:38Z" level=info msg="Service [nginx-service] has been addded/modified, it has no assigned external addresses"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:43:39Z" level=info msg="Service [nginx-service] has been addded/modified, it has no assigned external addresses"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:43:44Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.148]"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:43:44Z" level=info msg="New VIP [XXX.YY.77.148] for [nginx-service/035b0b4c-48ac-42bf-90bb-6b90406b7ed5] "
    

    OK, that worked

    $ k get svc
    NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
    kubernetes      ClusterIP      10.43.0.1       <none>          443/TCP        6d10h
    nginx-service   LoadBalancer   10.43.157.128   XXX.YY.77.148   80:30080/TCP   19s
    

    However, if I try to update the service now with loadBalancerIP, it fails. I set loadBalancerIP: XXX.YY.77.149 in the LoadBalancer service and apply it with k apply -f file.

    Log from kube-vip-cloud-provider-0

    I1117 01:49:27.706107       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="LoadbalancerIP" message="XXX.YY.77.148 -> XXX.YY.77.149"
    I1117 01:49:27.706484       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I1117 01:49:27.800194       1 loadBalancer.go:149] syncing service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5)
    I1117 01:49:27.800583       1 loadBalancer.go:164] found existing service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5) with vip XXX.YY.77.148
    I1117 01:49:27.801227       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    

    Log from kube-vip-ds

    [kube-vip-ds-vhkxp] time="2021-11-17T01:49:27Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"
    

    kubectl output: the issue here is that the VIP is not updated to XXX.YY.77.149.

    $ k get svc
    NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
    kubernetes      ClusterIP      10.43.0.1       <none>          443/TCP        6d10h
    nginx-service   LoadBalancer   10.43.157.128   XXX.YY.77.148   80:30080/TCP   7m40s
    

    So, I delete the service now.

    $ k delete -f  ../tmp/test-loadbalancer.yaml
    deployment.apps "nginx-deployment" deleted
    service "nginx-service" deleted
    

    Log from kube-vip-cloud-provider-0

    I1117 01:54:27.307074       1 loadBalancer.go:96] deleting service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5)
    I1117 01:54:27.307543       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="DeletingLoadBalancer" message="Deleting load balancer"
    I1117 01:54:27.477461       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="DeletedLoadBalancer" message="Deleted load balancer"
    

    Log from kube-vip-ds

    [kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="[LOADBALANCER] Stopping load balancers"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="[VIP] Releasing the Virtual IP [XXX.YY.77.148]"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg=Stopped
    [kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="Removed [035b0b4c-48ac-42bf-90bb-6b90406b7ed5] from manager, [0] advertised services remain"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="Service [nginx-service] has been deleted
    

    Now I add the service back in, still with XXX.YY.77.149 hardcoded, and that works, but keep going ...

    $ k apply -f  ../tmp/test-loadbalancer.yaml
    deployment.apps/nginx-deployment created
    service/nginx-service created
    

    Log from kube-vip-cloud-provider-0

    I1117 01:58:06.109447       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I1117 01:58:06.612550       1 loadBalancer.go:149] syncing service 'nginx-service' (28dec8dc-d5dc-4d70-92b7-a1da283f3940)
    I1117 01:58:06.704250       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    

    Log from kube-vip-ds

    [kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="New VIP [XXX.YY.77.149] for [nginx-service/28dec8dc-d5dc-4d70-92b7-a1da283f3940] "
    [kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Starting advertising address [XXX.YY.77.149] with kube-vip"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Started Load Balancer and Virtual IP"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"
    [kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"
    

    However, this is where issue number 2 comes in. If I now delete the LoadBalancer service and add it back in with a different hardcoded loadBalancerIP, XXX.YY.77.148, it fails to assign the desired VIP.

    Log from kube-vip-cloud-provider-0 (delete followed by add). Delete:

    I1117 02:10:27.676977       1 loadBalancer.go:96] deleting service 'nginx-service' (dfa8a94a-f5b0-443a-9228-a5b984c778fe)
    I1117 02:10:27.678238       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="DeletingLoadBalancer" message="Deleting load balancer"
    I1117 02:10:27.822216       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="DeletedLoadBalancer" message="Deleted load balancer"
    

    Add with loadBalancerIP XXX.YY.77.148:

    I1117 02:11:52.265166       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I1117 02:11:52.388895       1 loadBalancer.go:149] syncing service 'nginx-service' (ce7a213f-cece-41a5-88e4-e5cd8f8552e2)
    I1117 02:11:52.425914       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    

    Log from kube-vip-ds (delete followed by add). Delete:

    [kube-vip-ds-vhkxp] time="2021-11-17T02:10:27Z" level=info msg="[VIP] Releasing the Virtual IP [XXX.YY.77.149]"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:10:27Z" level=info msg=Stopped
    [kube-vip-ds-vhkxp] time="2021-11-17T02:10:27Z" level=info msg="Removed [dfa8a94a-f5b0-443a-9228-a5b984c778fe] from manager, [0] advertised services remain"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:10:27Z" level=info msg="Service [nginx-service] has been deleted"
    

    Add with loadBalancerIP XXX.YY.77.148:

    [kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.148]"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="New VIP [XXX.YY.77.148] for [nginx-service/ce7a213f-cece-41a5-88e4-e5cd8f8552e2] "
    [kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="Starting advertising address [XXX.YY.77.148] with kube-vip"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="Started Load Balancer and Virtual IP"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=error msg="Error updating Service [nginx-service] Status: Operation cannot be fulfilled on services \"nginx-service\": the object has been modified; please apply your changes to the latest version and try again"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.148]"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:12:45Z" level=info msg="Service [rke2-coredns-rke2-coredns] has been addded/modified, it has no assigned external addresses"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:12:45Z" level=info msg="Service [rke2-metrics-server] has been addded/modified, it has no assigned external addresses"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:12:45Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.148]"
    [kube-vip-ds-vhkxp] time="2021-11-17T02:12:45Z" level=info msg="Service [kubernetes] has been addded/modified, it has no assigned external addresses"
    

    kubectl output: the external IP is shown as pending, whereas it should have been assigned the expected XXX.YY.77.148.

    $ k get svc
    NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    kubernetes      ClusterIP      10.43.0.1       <none>        443/TCP        6d10h
    nginx-service   LoadBalancer   10.43.234.151   <pending>     80:30080/TCP   21m
    
    opened by tuxdevnow 12
  • fix ip allocation across namespaces

    The LoadBalancer would poll existing IPs only for a specific namespace. Modified the logic to first determine whether the user is using a global CIDR or range. If global, it collects all services in all namespaces into existingServiceIPS and determines which addresses are in use from there.

    opened by JinxCappa 4
  • Does tag 0.0.1 support arm64?

    Hi. I am getting an exec format error when spinning up the image kubevip/kube-vip-cloud-provider:0.0.1. According to Docker Hub, that tag does support arm64. I am running k3s on arm64 and the VIP is working, but I cannot get the cloud provider to deploy.

    Thanks!

    opened by braucktoon 2
  • ConfigMap IP Range Bug!

    kubectl create configmap --namespace kube-system kubevip --from-literal range-global=192.168.0.200-192.168.0.202

    If the range IP pool spans more than a /24 prefix, kube-vip-cloud-provider will crash.

    Example:

    • cidr-global: 192.168.0.0/20 ---- OK
    • range-global: 192.168.0.1-192.168.0.255 ---- OK
    • range-global: 192.168.0.1-192.168.1.255 ---- kube-vip-cloud-provider will crash with no logs
    • range-global: 192.168.0.1-192.168.0.255,192.168.1.0-192.168.1.255 ---- OK too

    bug 
    opened by hugo345816 2
  • vip is not updated in service.status.loadBalancer.

    I installed kube-vip ccm in my local environment.

    When creating svc with type: LoadBalancer, spec.loadBalancerIP is set, but status.loadBalancer is not updated.

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":80}],"selector":{"name":"nginx"},"type":"LoadBalancer"}}
      creationTimestamp: "2021-08-10T02:39:37Z"
      finalizers:
      - service.kubernetes.io/load-balancer-cleanup
      name: nginx
      namespace: default
      resourceVersion: "4364132"
      uid: 63134136-0804-4ea9-a548-bc57891385be
    spec:
      clusterIP: 10.103.121.46
      clusterIPs:
      - 10.103.121.46
      externalTrafficPolicy: Cluster
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      loadBalancerIP: 172.28.128.190
      ports:
      - name: http
        nodePort: 31878
        port: 80
        protocol: TCP
        targetPort: 80
      selector:
        name: nginx
      sessionAffinity: None
      type: LoadBalancer
    status:
      loadBalancer: {}
    

    EXTERNAL-IP is still output as <pending>.

    $ kubectl get svc
    NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        8d
    nginx        LoadBalancer   10.103.121.46   <pending>     80:31878/TCP   66s
    

    This is the ccm log.

    I0810 02:39:37.233343       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I0810 02:39:37.252303       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
    I0810 02:39:37.270534       1 loadBalancer.go:149] syncing service 'nginx' (63134136-0804-4ea9-a548-bc57891385be)
    I0810 02:39:37.270572       1 loadBalancer.go:229] No cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
    I0810 02:39:37.270587       1 loadBalancer.go:234] Taking address from [cidr-global] pool
    I0810 02:39:37.270599       1 loadBalancer.go:190] Updating service [nginx], with load balancer IPAM address [172.28.128.190]
    E0810 02:39:37.285650       1 controller.go:275] error processing service default/nginx (will retry): failed to ensure load balancer: Error updating Service Spec [nginx] : Operation cannot be fulfilled on services "nginx": the object has been modified; please apply your changes to the latest version and try again
    I0810 02:39:37.286119       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Error updating Service Spec [nginx] : Operation cannot be fulfilled on services \"nginx\": the object has been modified; please apply your changes to the latest version and try again"
    I0810 02:39:42.286970       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
    I0810 02:39:42.286992       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I0810 02:39:42.300116       1 loadBalancer.go:149] syncing service 'nginx' (63134136-0804-4ea9-a548-bc57891385be)
    I0810 02:39:42.300161       1 loadBalancer.go:229] No cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
    I0810 02:39:42.300178       1 loadBalancer.go:234] Taking address from [cidr-global] pool
    I0810 02:39:42.300191       1 loadBalancer.go:190] Updating service [nginx], with load balancer IPAM address [172.28.128.190]
    I0810 02:39:42.317369       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    
    opened by moonek 2
  • Please push multi-platform images

    The Makefile already includes building multi-platform images including arm32 and arm64 support. It would be great if you could push the resulting image to the public registry.

    opened by Knappek 2
  • kube-vip.io docs point to old manifest?

    Documentation at https://kube-vip.io/hybrid/services/#using-the-plunder-cloud-provider-ccm points to using https://kube-vip.io/manifests/controller.yaml to deploy the Plunder Cloud Provider.

    That manifest points to image: plndr/plndr-cloud-provider:0.1.5 which only has an amd64 image. I was able to deploy on arm using plndr/plndr-cloud-provider:9eb45d26, which is the current nightly build.

    However, as the provider has moved to this repository, it looks like I should be using this manifest - https://github.com/kube-vip/kube-vip-cloud-provider/blob/main/manifest/kube-vip-cloud-controller.yaml, which points to kubevip/kube-vip-cloud-provider:0.1 and uses the new name.

    kubevip/kube-vip-cloud-provider:0.1 doesn't have an arm build either - however the latest tag does.

    • It looks like the documentation needs updating to point to the latest kube-vip-cloud-provider manifest?
    • It also looks like the manifest needs to point to a multi-arch image, or the 0.1 image needs to be built for multi-arch?

    Happy to do a PR for both, if my guesses are correct!

    opened by sammcgeown 2
  • Enhancement request - shared IP for TCP and UDP load balancer services

    Hi,

    I'm currently using kube-vip and MetalLB. I'd like to switch over to using kube-vip for my load balancers (love the DHCP option), but I need to be able to have TCP and UDP services sharing an IP address (with MetalLB I can annotate with metallb.universe.tf/allow-shared-ip: sharedservicename).

    Is there a plan to introduce something similar, or is this not a possibility?

    opened by sammcgeown 2
  • Manifest doesn't point to the latest release

    On your website you say you can install the latest release with the manifest located at https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml.

    However that still points to the 0.0.1 release rather than the 0.0.2 release.

    opened by evilhamsterman 1
  • Typo `kube=vip`

    Here: https://github.com/kube-vip/kube-vip-cloud-provider/blob/ac74cc5aaa259aa6a0da77bdfa9533306c305295/pkg/provider/loadBalancer.go#L46. The label should be kube-vip.

    opened by lubronzhan 0
  • Multi cluster loadbalancing

    Hi,

    I'm looking for an on-premise solution for managing services of type LoadBalancer across multiple Kubernetes clusters. This is currently not supported and is maybe out of scope for this project. I recognized that this is possible by configuring kube-vip to use WireGuard.

    Would it be an acceptable feature if the kube-vip-cloud-provider could create services in an upstream cluster and map node ports of downstream cluster nodes to the upstream cluster service?

    Are there big disadvantages compared to the wireguard solution with kube-vip?

    Or should it be a separate provider for that because the logic would be completely decoupled from the current kube-vip-cloud-provider implementation?

    I have already put some time into an attempt to implement it in general. You can take a look at my fork. I've tested it successfully with an ingress-nginx controller, but not yet with TLS enabled. If it is acceptable to you, I would like to open a pull request.

    There are some limitations so far: endpoints for upstream services are limited to one subset, and sessionAffinity is hardcoded. I've added an example configuration to the examples directory.

    opened by soer3n 0
  • Configmap seems hardcoded with the kube-system namespace

    If I create the configmap in the namespace where I have installed kube-vip-cloud-provider, the configuration is not detected:

    I0720 21:17:33.111202       1 event.go:291] "Event occurred" object="default/kubernetes" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I0720 21:17:33.111178       1 loadBalancer.go:79] syncing service 'kubernetes' (ec4af361-dfc0-42c3-836a-c5b4c5e2ec9b)
    I0720 21:17:33.111326       1 event.go:291] "Event occurred" object="default/kubernetes" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
    I0720 21:17:33.122677       1 loadBalancer.go:169] no cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
    I0720 21:17:33.122689       1 loadBalancer.go:172] no global cidr config exists [cidr-global]
    I0720 21:17:33.122692       1 loadBalancer.go:186] no range config for namespace [default] exists in key [range-default] configmap [kubevip]
    I0720 21:17:33.122694       1 loadBalancer.go:189] no global range config exists [range-global]
    E0720 21:17:33.122706       1 controller.go:275] error processing service default/kubernetes (will retry): failed to ensure load balancer: no address pools could be found
    I0720 21:17:33.122730       1 event.go:291] "Event occurred" object="default/kubernetes" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: no address pools could be found"
    

    It works only if I create it in the kube-system namespace.

    opened by gigi206 4
  • Failed to obtain the LoadBalancer IP address

    1. env:

    v1.24.3+k3s1, kube-vip v0.4.4, kube-vip-cloud-provider v0.0.2, ARP mode

    2. problem: After kubectl expose deploy nginx --port=80 --type=LoadBalancer is executed, external-IP is always in the Pending state.

    3. log:

    $ kubectl logs kube-vip-cloud-provider-0 -n kube-system
    I0719 09:47:29.406828       1 serving.go:331] Generated self-signed cert in-memory
    W0719 09:47:30.554329       1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
    I0719 09:47:30.559632       1 controllermanager.go:127] Version: v0.0.0-master+$Format:%h$
    W0719 09:47:30.561301       1 controllermanager.go:139] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
    I0719 09:47:30.564234       1 secure_serving.go:197] Serving securely on [::]:10258
    I0719 09:47:30.564309       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-vip-cloud-controller...
    I0719 09:47:30.565157       1 tlsconfig.go:240] Starting DynamicServingCertificateController
    I0719 09:47:30.611134       1 leaderelection.go:253] successfully acquired lease kube-system/kube-vip-cloud-controller
    I0719 09:47:30.611288       1 event.go:291] "Event occurred" object="kube-system/kube-vip-cloud-controller" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="kube-vip-cloud-provider-0_ec957898-20c7-44de-940a-d97c31304f4a became leader"
    I0719 09:47:30.611633       1 event.go:291] "Event occurred" object="kube-system/kube-vip-cloud-controller" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="kube-vip-cloud-provider-0_ec957898-20c7-44de-940a-d97c31304f4a became leader"
    I0719 09:47:30.614564       1 node_controller.go:108] Sending events to api server.
    W0719 09:47:30.614660       1 core.go:57] failed to start cloud node controller: cloud provider does not support instances
    W0719 09:47:30.614685       1 controllermanager.go:251] Skipping "cloud-node"
    I0719 09:47:30.617399       1 node_lifecycle_controller.go:77] Sending events to api server
    W0719 09:47:30.617470       1 core.go:76] failed to start cloud node lifecycle controller: cloud provider does not support instances
    W0719 09:47:30.617487       1 controllermanager.go:251] Skipping "cloud-node-lifecycle"
    I0719 09:47:30.620560       1 controllermanager.go:254] Started "service"
    I0719 09:47:30.620606       1 core.go:108] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
    W0719 09:47:30.620714       1 controllermanager.go:251] Skipping "route"
    I0719 09:47:30.620737       1 controller.go:239] Starting service controller
    I0719 09:47:30.620771       1 shared_informer.go:240] Waiting for caches to sync for service
    I0719 09:47:30.724097       1 shared_informer.go:247] Caches are synced for service
    I0719 09:50:09.504447       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I0719 09:50:09.553169       1 loadBalancer.go:78] syncing service 'nginx' (d05f7293-bd30-4315-948f-fe4a1794c6c4)
    I0719 09:50:09.554412       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
    I0719 09:50:09.626388       1 loadBalancer.go:153] no cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
    I0719 09:50:09.626423       1 loadBalancer.go:156] no global cidr config exists [cidr-global]
    I0719 09:50:09.626437       1 loadBalancer.go:175] no range config for namespace [default] exists in key [range-default] configmap [kubevip]
    I0719 09:50:09.626446       1 loadBalancer.go:180] Taking address from [range-global] pool
    I0719 09:50:09.626557       1 addressbuilder.go:95] Rebuilding addresse cache, [6] addresses exist
    I0719 09:50:09.639181       1 loadBalancer.go:121] Updating service [nginx], with load balancer IPAM address [192.168.60.25]
    I0719 09:50:09.669527       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="LoadbalancerIP" message=" -> 192.168.60.25"
    I0719 09:50:09.669853       1 loadBalancer.go:78] syncing service 'nginx' (d05f7293-bd30-4315-948f-fe4a1794c6c4)
    I0719 09:50:09.669884       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    I0719 09:50:09.669899       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
    I0719 09:50:09.669912       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
    I0719 09:50:09.669924       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"

    opened by jieshiyeskey 0
  • statefulset config of env var KUBEVIP_NAMESPACE cannot take effect in the pod

    I installed kube-vip-cloud-provider via the Helm chart into a namespace called "kube-vip", and also applied a ConfigMap named kubevip into this namespace (for details, see install-kube-vip-cloud-provider-for-service-loadbalancer), but the pod log still shows that it retrieves the ConfigMap from the namespace kube-system.

    Inf-Alpine01:~$ kubectl describe statefulset kube-vip-cloud-provider -n kube-vip
    Name:               kube-vip-cloud-provider
    Namespace:          kube-vip
    CreationTimestamp:  Sat, 14 May 2022 15:18:35 +0800
      Containers:
       kube-vip-cloud-provider:
        Image:      kubevip/kube-vip-cloud-provider:v0.0.2
        Port:       <none>
        Host Port:  <none>
        Command:
          /kube-vip-cloud-provider
          --leader-elect-resource-name=kube-vip-cloud-controller
        Environment:
          KUBEVIP_NAMESPACE:   kube-vip
          KUBEVIP_CONFIG_MAP:  kubevip
        Mounts:                <none>
      Volumes:                 <none>
    Volume Claims:             <none>
    Events:
      Type    Reason            Age   From                    Message
      ----    ------            ----  ----                    -------
      Normal  SuccessfulCreate  32m   statefulset-controller  create Pod kube-vip-cloud-provider-0 in StatefulSet kube-vip-cloud-provider successful
    Inf-Alpine01:~$ kubectl logs kube-vip-cloud-provider-0 -n kube-vip | grep configMap
    E0514 07:23:18.807292       1 loadBalancer.go:94] Unable to retrieve kube-vip ipam config from configMap [kubevip] in kube-system
    
    opened by fsdrw08 1
  • Loadbalancer IP won't change after deployment

    If you deploy a LoadBalancer without a loadBalancerIP, it properly assigns an IP from the defined range. But if you later add a loadBalancerIP, it registers an event that looks like the IP is changing, but it never actually does.

    Vice versa, if you deploy with a loadBalancerIP and then later remove it, it again appears to register an event that the IP is changing but never actually changes it.

    For example, here I try deploying the coredns chart set to use a LoadBalancer service type, while adding and removing the loadBalancerIP option (extra data cleaned for brevity):

    ❯ helm upgrade --install -f values.yaml coredns coredns/coredns
    ☸ starfleet (default) in homelab/k8s/coredns on  main [?]
    ❯ k get svc
    NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
    coredns-coredns   LoadBalancer   10.43.33.1   192.168.2.100   53:31509/UDP   4s
    kubernetes        ClusterIP      10.43.0.1    <none>          443/TCP        27h
    ☸ starfleet (default) in homelab/k8s/coredns on  main [?]
    ❯ helm upgrade --install -f values.yaml coredns coredns/coredns
    ☸ starfleet (default) in homelab/k8s/coredns on  main [?]
    ❯ k get svc
    NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
    coredns-coredns   LoadBalancer   10.43.33.1   192.168.2.100   53:31509/UDP   46s
    kubernetes        ClusterIP      10.43.0.1    <none>          443/TCP        27h
    ☸ starfleet (default) in homelab/k8s/coredns on  main [?]
    ❯ k describe svc coredns-coredns
    Name:                     coredns-coredns
    Namespace:                default
    Labels:                   app.kubernetes.io/instance=coredns
                              app.kubernetes.io/managed-by=Helm
                              app.kubernetes.io/name=coredns
                              helm.sh/chart=coredns-1.16.7
                              implementation=kube-vip
                              ipam-address=192.168.2.100
    Annotations:              meta.helm.sh/release-name: coredns
                              meta.helm.sh/release-namespace: default
    Selector:                 app.kubernetes.io/instance=coredns,app.kubernetes.io/name=coredns
    Type:                     LoadBalancer
    IP Family Policy:         SingleStack
    IP Families:              IPv4
    IP:                       10.43.33.1
    IPs:                      10.43.33.1
    IP:                       192.168.2.2
    LoadBalancer Ingress:     192.168.2.100
    Port:                     udp-53  53/UDP
    TargetPort:               53/UDP
    NodePort:                 udp-53  31509/UDP
    Endpoints:                10.42.0.16:53,10.42.1.73:53,10.42.2.146:53
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:
      Type     Reason                   Age               From                Message
      ----     ------                   ----              ----                -------
      Normal   LoadbalancerIP           49s               service-controller  -> 192.168.2.100
      Normal   EnsuringLoadBalancer     5s (x3 over 49s)  service-controller  Ensuring load balancer
      Warning  UnAvailableLoadBalancer  5s (x3 over 49s)  service-controller  There are no available nodes for LoadBalancer
      Normal   EnsuredLoadBalancer      5s (x3 over 49s)  service-controller  Ensured load balancer
      Normal   LoadbalancerIP           5s                service-controller  192.168.2.100 -> 192.168.2.2
    ☸ starfleet (default) in homelab/k8s/coredns on  main [?]
    ❯ helm upgrade --install -f values.yaml coredns coredns/coredns
    ☸ starfleet (default) in homelab/k8s/coredns on  main [?]
    ❯ k describe svc coredns-coredns
    Name:                     coredns-coredns
    Namespace:                default
    Labels:                   app.kubernetes.io/instance=coredns
                              app.kubernetes.io/managed-by=Helm
                              app.kubernetes.io/name=coredns
                              helm.sh/chart=coredns-1.16.7
                              implementation=kube-vip
                              ipam-address=192.168.2.101
    Annotations:              meta.helm.sh/release-name: coredns
                              meta.helm.sh/release-namespace: default
    Selector:                 app.kubernetes.io/instance=coredns,app.kubernetes.io/name=coredns
    Type:                     LoadBalancer
    IP Family Policy:         SingleStack
    IP Families:              IPv4
    IP:                       10.43.33.1
    IPs:                      10.43.33.1
    IP:                       192.168.2.101
    LoadBalancer Ingress:     192.168.2.100
    Port:                     udp-53  53/UDP
    TargetPort:               53/UDP
    NodePort:                 udp-53  31509/UDP
    Endpoints:                10.42.0.16:53,10.42.1.73:53,10.42.2.146:53
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:
      Type     Reason                   Age                 From                Message
      ----     ------                   ----                ----                -------
      Normal   LoadbalancerIP           2m23s               service-controller  -> 192.168.2.100
      Normal   LoadbalancerIP           99s                 service-controller  192.168.2.100 -> 192.168.2.2
      Normal   EnsuringLoadBalancer     0s (x5 over 2m23s)  service-controller  Ensuring load balancer
      Warning  UnAvailableLoadBalancer  0s (x5 over 2m23s)  service-controller  There are no available nodes for LoadBalancer
      Normal   EnsuredLoadBalancer      0s (x5 over 2m23s)  service-controller  Ensured load balancer
      Normal   LoadbalancerIP           0s                  service-controller  192.168.2.2 ->
      Normal   LoadbalancerIP           0s                  service-controller  -> 192.168.2.101
    ☸ starfleet (default) in homelab/k8s/coredns on  main [?]
    ❯ k get svc
    NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
    coredns-coredns   LoadBalancer   10.43.33.1   192.168.2.100   53:31509/UDP   2m39s
    kubernetes        ClusterIP      10.43.0.1    <none>          443/TCP        27h
    

    Throughout all of that, only 192.168.2.100 ever responded.

    opened by evilhamsterman 0
  • A few questions

    Hi, thank you for maintaining this project. I've just implemented kube-vip + cloud-provider instead of MetalLB to try creating an HA frontend for my cluster. I have a few questions, sort of an FAQ, as I wasn't able to find clear answers in the documentation. I want to understand kube-vip well, as it is going to be an entry point to the cluster control plane, and as such I need to be able to troubleshoot it with deeper understanding.

    1. When running a command with -w, it works fine, but after some time of watching I get the following error (192.168.88.0 is the VIP):
    an error on the server ("unable to decode an event from the watch stream: read tcp 192.168.0.147:53040->192.168.88.0:6443: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.") has prevented the request from succeeding
    
    2. Kubernetes exposes a default service called kubernetes whose endpoints point to the nodes hosting the control plane. I'm confused why we can't just use that service and change it to type LoadBalancer instead of having to use kube-vip. Is that even possible (I haven't tried yet)? If so, what would the benefits of kube-vip be over that approach?
    3. Is it possible to expose a service on an IP outside of the provided range/CIDR, provided that loadBalancerIP: "x.x.x.x" is set in the service spec? I tried that and it doesn't seem to work.
    4. MetalLB has the ability to 'request' an IP from a specific range by specifying it in the spec; do you think this will be possible in kube-vip anytime soon?
    5. Why are the cloud controller and the --services command argument in kube-vip both needed? Do I understand correctly that the cloud controller just assigns the IP, and kube-vip then picks it up from the service and starts listening on it?
    6. In BGP mode, I don't think the UPnP and DHCP features can be utilized. Is it worth updating the docs to explicitly state that?
    7. Regarding point 6, Kubernetes supports loadBalancerClass now (https://kubernetes.io/docs/concepts/services-networking/_print/#load-balancer-class), meaning we should be able to run two instances of kube-vip; theoretically one could be in ARP mode to allow usage of DHCP/UPnP. Will this feature be supported?
    8. If UPnP were to be used, would all services get published through it, or can this be controlled per service?
    9. Lastly, in UPnP mode, would the external IP be discovered and written back to the service spec?

    Thanks for your time, Mateusz

    opened by mateuszdrab 2
Releases (v0.0.3)
  • v0.0.3 (Jul 19, 2022)

    What's Changed

    • Update kube-vip-cloud-controller.yaml by @thebsdbox in https://github.com/kube-vip/kube-vip-cloud-provider/pull/33
    • Fix typo kube=vip by @lubronzhan in https://github.com/kube-vip/kube-vip-cloud-provider/pull/37
    • fix ip allocation across namespaces by @JinxCappa in https://github.com/kube-vip/kube-vip-cloud-provider/pull/36

    New Contributors

    • @lubronzhan made their first contribution in https://github.com/kube-vip/kube-vip-cloud-provider/pull/37
    • @JinxCappa made their first contribution in https://github.com/kube-vip/kube-vip-cloud-provider/pull/36

    Full Changelog: https://github.com/kube-vip/kube-vip-cloud-provider/compare/v0.0.2...v0.0.3

  • v0.0.2 (Feb 28, 2022)

    What's Changed

    • adds golangci-lint to replace current check validation and cleans up by @stevesloka in https://github.com/kube-vip/kube-vip-cloud-provider/pull/17
    • Add test action to Makefile by @stevesloka in https://github.com/kube-vip/kube-vip-cloud-provider/pull/18
    • Fix IP range deadlock by @hugo345816 in https://github.com/kube-vip/kube-vip-cloud-provider/pull/16
    • Full re-write of IPAM logic by @thebsdbox in https://github.com/kube-vip/kube-vip-cloud-provider/pull/23
    • Remove extra arg in newLoadBalancer by @AshleyDumaine in https://github.com/kube-vip/kube-vip-cloud-provider/pull/25
    • Exclude invalid IPs when range crosses the third octet by @xprt64 in https://github.com/kube-vip/kube-vip-cloud-provider/pull/29
    • Ci builds by @stevesloka in https://github.com/kube-vip/kube-vip-cloud-provider/pull/19
    • Finalise tidying of checks and missing functions by @thebsdbox in https://github.com/kube-vip/kube-vip-cloud-provider/pull/26
    • New release changes by @thebsdbox in https://github.com/kube-vip/kube-vip-cloud-provider/pull/30

    New Contributors

    • @stevesloka made their first contribution in https://github.com/kube-vip/kube-vip-cloud-provider/pull/17
    • @hugo345816 made their first contribution in https://github.com/kube-vip/kube-vip-cloud-provider/pull/16
    • @AshleyDumaine made their first contribution in https://github.com/kube-vip/kube-vip-cloud-provider/pull/25
    • @xprt64 made their first contribution in https://github.com/kube-vip/kube-vip-cloud-provider/pull/29

    Full Changelog: https://github.com/kube-vip/kube-vip-cloud-provider/compare/v0.0.1...v0.0.2

  • v0.0.1 (Oct 17, 2021)

    What's Changed

    • Fix mistake in README.md by @yaocw2020 in https://github.com/kube-vip/kube-vip-cloud-provider/pull/10
    • Fix to DHCP and IPAM by @thebsdbox in https://github.com/kube-vip/kube-vip-cloud-provider/pull/13

    New Contributors

    • @yaocw2020 made their first contribution in https://github.com/kube-vip/kube-vip-cloud-provider/pull/10

    Full Changelog: https://github.com/kube-vip/kube-vip-cloud-provider/compare/0.1...v0.0.1

Owner

kube-vip: Kubernetes HA/Load Balancer