Kilo

Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg).

Overview

Kilo connects nodes in a cluster by providing an encrypted layer 3 network that can span across data centers and public clouds. By allowing pools of nodes in different locations to communicate securely, Kilo enables the operation of multi-cloud clusters. Kilo's design allows clients to VPN to a cluster in order to securely access services running on the cluster. In addition to creating multi-cloud clusters, Kilo enables the creation of multi-cluster services, i.e. services that span across different Kubernetes clusters.

An introductory video about Kilo from KubeCon EU 2019 can be found on YouTube.

How it works

Kilo uses WireGuard, a performant and secure VPN, to create a mesh between the different nodes in a cluster. The Kilo agent, kg, runs on every node in the cluster, setting up the public and private keys for the VPN as well as the necessary rules to route packets between locations.
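
Once Kilo is running, the state of the mesh can be inspected on any node with standard WireGuard and iproute2 tooling. The commands below are only a sketch and assume the interface is named kilo0, as in the examples later in this document:

sudo wg show kilo0
sudo ip addr show dev kilo0
sudo ip route show dev kilo0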

Kilo can operate either as a complete, independent networking provider or as an add-on complementing the cluster-networking solution currently installed on a cluster. This means that if a cluster uses, for example, Flannel for networking, Kilo can be installed on top to enable pools of nodes in different locations to join; Kilo will take care of the network between locations, while Flannel will take care of the network within locations.

Installing on Kubernetes

Kilo can be installed on any Kubernetes cluster either pre- or post-bring-up.

Step 1: get WireGuard

Kilo requires the WireGuard kernel module to be loaded on all nodes in the cluster. Starting with Linux 5.6, the kernel includes WireGuard in-tree; Linux distributions with older kernels will need to install WireGuard. For most Linux distributions, this can be done using the system package manager. See the WireGuard website for up-to-date installation instructions.
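
For example, on a Debian or Ubuntu node with an older kernel, the following is one way to install the module and confirm that it loads; package names and commands vary by distribution, so treat this as a sketch and prefer the instructions on the WireGuard website:

sudo apt install wireguard
sudo modprobe wireguard
lsmod | grep wireguard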

Clusters with nodes on which the WireGuard kernel module cannot be installed can use Kilo by leveraging a userspace WireGuard implementation.

Step 2: open WireGuard port

The nodes in the mesh will require an open UDP port in order to communicate. By default, Kilo uses UDP port 51820.
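
How the port is opened depends on the environment, e.g. cloud security groups, a perimeter firewall, or a host firewall. As a sketch, with plain iptables on a node this might look like:

# Allow inbound WireGuard traffic on the default Kilo port.
sudo iptables -A INPUT -p udp --dport 51820 -j ACCEPT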

Step 3: specify topology

By default, Kilo creates a mesh between the different logical locations in the cluster, e.g. data-centers, cloud providers, etc. For this, Kilo needs to know which groups of nodes are in each location. If the cluster does not automatically set the topology.kubernetes.io/region node label, then the kilo.squat.ai/location annotation can be used. For example, the following snippet could be used to annotate all nodes with GCP in the name:

for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="gcp"; done
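
To double-check the location that Kilo will use for a single node, the annotation can also be set and read back explicitly; my-node below is a placeholder name:

kubectl annotate node my-node kilo.squat.ai/location="gcp"
kubectl get node my-node -o jsonpath='{.metadata.annotations.kilo\.squat\.ai/location}'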

Kilo allows the topology of the encrypted network to be completely customized. See the topology docs for more details.

Step 4: ensure nodes have public IP

At least one node in each location must have an IP address that is routable from the other locations. If the locations are in different clouds or private networks, then this must be a public IP address. If this IP address is not automatically configured on the node's Ethernet device, it can be manually specified using the kilo.squat.ai/force-endpoint annotation.
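
For example, to pin the endpoint of a placeholder node named my-node to a known routable address (203.0.113.10 is a documentation IP; see the annotations docs for the exact accepted formats, e.g. whether to include a port or use a DNS name):

kubectl annotate node my-node kilo.squat.ai/force-endpoint=203.0.113.10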

Step 5: install Kilo!

Kilo can be installed by deploying a DaemonSet to the cluster.

To run Kilo on kubeadm:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-kubeadm.yaml

To run Kilo on bootkube:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-bootkube.yaml

To run Kilo on Typhoon:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon.yaml

To run Kilo on k3s:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s.yaml
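
Whichever manifest is used, a quick sanity check is to confirm that the DaemonSet rolled out and that a kilo Pod is running on every node; the manifests referenced above use the kube-system namespace:

kubectl -n kube-system rollout status daemonset kilo
kubectl -n kube-system get pods -o wide | grep kilo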

Add-on Mode

Administrators of existing clusters who do not want to swap out the existing networking solution can run Kilo in add-on mode. In this mode, Kilo will add advanced features to the cluster, such as VPN and multi-cluster services, while delegating CNI management and local networking to the cluster's current networking provider. Kilo currently supports running on top of Flannel.

For example, to run Kilo on a Typhoon cluster running Flannel:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon-flannel.yaml

See the manifests directory for more examples.

VPN

Kilo also enables peers outside of a Kubernetes cluster to connect to the VPN, allowing cluster applications to securely access external services and permitting developers and support to securely debug cluster resources. In order to declare a peer, start by defining a Kilo peer resource:

cat <<'EOF' | kubectl apply -f -
apiVersion: kilo.squat.ai/v1alpha1
kind: Peer
metadata:
  name: squat
spec:
  allowedIPs:
  - 10.5.0.1/32
  publicKey: GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg=
  persistentKeepalive: 10
EOF

This configuration can then be applied to a local WireGuard interface, e.g. wg0, to give it access to the cluster with the help of the kgctl tool:

kgctl showconf peer squat > peer.ini
sudo wg setconf wg0 peer.ini
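
If the local wg0 interface does not exist yet, the full client-side sequence might look roughly like the sketch below. The 10.5.0.1/32 address and the peer name squat come from the example above, the key file names are arbitrary, and routes to the cluster's IP ranges still need to be added (see the VPN docs):

# Generate a key pair; the contents of publickey go into the Peer's publicKey field.
wg genkey | tee privatekey | wg pubkey > publickey
sudo ip link add wg0 type wireguard
sudo wg set wg0 private-key ./privatekey
sudo ip addr add 10.5.0.1/32 dev wg0
sudo ip link set wg0 up
kgctl showconf peer squat > peer.ini
sudo wg setconf wg0 peer.ini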

See the VPN docs for more details.

Multi-cluster Services

A logical application of Kilo's VPN is to connect two different Kubernetes clusters. This allows workloads running in one cluster to access services running in another. For example, if cluster1 is running a Kubernetes Service that we need to access from Pods running in cluster2, we could do the following:

# Register the nodes in cluster1 as peers of cluster2.
for n in $(kubectl --kubeconfig $KUBECONFIG1 get no -o name | cut -d'/' -f2); do
    kgctl --kubeconfig $KUBECONFIG1 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR1 | kubectl --kubeconfig $KUBECONFIG2 apply -f -
done
# Register the nodes in cluster2 as peers of cluster1.
for n in $(kubectl --kubeconfig $KUBECONFIG2 get no -o name | cut -d'/' -f2); do
    kgctl --kubeconfig $KUBECONFIG2 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR2 | kubectl --kubeconfig $KUBECONFIG1 apply -f -
done
# Create a Service in cluster2 to mirror the Service in cluster1.
cat <<EOF | kubectl --kubeconfig $KUBECONFIG2 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: important-service
spec:
  ports:
    - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: important-service
subsets:
  - addresses:
      - ip: $CLUSTERIP # The cluster IP of the important service on cluster1.
    ports:
      - port: 80
EOF

Now, important-service can be used on cluster2 just like any other Kubernetes Service.
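
As a rough way to verify the setup, one could curl the mirrored Service from a throwaway Pod in cluster2; the Pod name and image here are arbitrary:

kubectl --kubeconfig $KUBECONFIG2 run curl-test --rm -it --image=alpine --restart=Never -- wget -qO- http://important-service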

See the multi-cluster services docs for more details.

Analysis

The topology and configuration of a Kilo network can be analyzed using the kgctl command line tool. For example, the graph command can be used to generate a graph of the network in Graphviz format:

kgctl graph | circo -Tsvg > cluster.svg
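
The graph is emitted in Graphviz's DOT format, so other layout engines can render it as well; for example, with dot instead of circo:

kgctl graph | dot -Tsvg > cluster.svg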

Issues
  • How to connect two clusters, networking and kubectl?

    Please let me know if this is inappropriate for here, and I'll close and research independently.

    I'm struggling with my usage case, which I believe is possible with kilo.

    I have a k3s cluster (with kilo) in kvm virtual machines using NAT. These are on a host with a public ip (the local network can ping the cluster). I want to use the cluster to run backend services (longhorn, postgresql, etc.). I then also want to run a single node cluster on the host machine, to connect to the backend service cluster and provide public facing services.

    I want to be able to kubectl to both clusters (I like octant to monitor/manipulate cluster status). I also want to share services between clusters (predominantly backend to single node cluster). I am currently using kubectl from the host to access the backend cluster. I want to be able to do this remotely as I could for a cluster with a public ip.

    It seems that the correct approach might be to set up the kilo vpn between the two clusters, and then a vpn server to provide a public endpoint on the public facing single node instance.

    My head is going a bit fluffy with all the interconnected parts, and what needs to be where (the order to connect the two clusters, applying metallb to the clusters, how to identify and access the independent clusters and kubectl to them).

    I'm assuming that I will need to install kgctl on the kvm host.

    Any guidance will be appreciated.

    opened by tetricky 26
  • k3s kilo pods crashlooping

    sudo kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-k3s-flannel.yaml

    serviceaccount/kilo created
    clusterrole.rbac.authorization.k8s.io/kilo created
    clusterrolebinding.rbac.authorization.k8s.io/kilo created
    daemonset.apps/kilo created
    

    sudo kubectl logs -f kilo-cz64w -n kube-system

    failed to create Kubernetes config: Error loading config file "/etc/kubernetes/kubeconfig": read /etc/kubernetes/kubeconfig: is a directory

    I think the problem is with kilo-k3s-flannel.yaml:99.

    opened by ibalajiarun 24
  • Can Kilo Work With Nodes Behind NAT?

    I've been looking into using Kubernetes in an at-edge setting. In this type of deployment I'd be setting up nodes behind other people's NAT'ed networks. Kubernetes' API and CRDs make a lot of things I need to do (daemonsets, service meshes, config management, etc.) very simple. Wireguard would provide a transparent security layer. In my application I don't mind the high latency of communications to the API server and other things. One thing that I don't control in my deployment is the router at the locations of deployment. I can guarantee there will be a network connection that will be able to speak to my api server, but I cannot forward ports.

    I noticed in your documentation that you must provide at least one public IP in each region. Is there some way to use kilo to avoid this constraint? Where does this constraint come from? Is this some inherent feature of WG?

    enhancement 
    opened by gravypod 18
  • kilo interrupts existing connections every 30s between a public master and a node behind NAT

    Hi, I have one master node in region A with a public ip and a worker node in region B behind a NAT (two separate networks).

    After deploying Kilo, I annotated both nodes to force the external ip (the master with its own public ip and the worker with the NAT public ip) and to set the related location on each (master: region-a, worker: region-b).

    Checking the wireguard peers on the master with the wg command, I can see the peer of the worker with the NAT public ip as the endpoint, but the port is different from the wireguard listen port set on the worker node.

    I can also see that a handshake was made successfully, but after approximately 30s Kilo recreates the peer because it detects differences in the configuration (log: 'WireGuard configurations are different') due to the endpoint port, interrupting existing connections.

    How can I solve this? Thanks in advance.

    opened by carlosrmendes 17
  • Why does kilo get cluster config using kubeconfig (or API server URL flag) when it has a service account?

    While setting up kilo on a k3s cluster I noticed that it uses -kubeconfig, or -master to get the config that is used when interfacing with the cluster. This code can be seen here.

    This seems like a security problem - why should kilo require access to my kubeconfig, which contains credentials that have the power to do anything to the cluster? Moreover, it seems redundant: I looked through kilo-k3s-flannel.yaml (which is what I used to get it working) and noticed that a service account is created for kilo with all of the permissions it should need.

    This example (see main.go) uses this function to get the config. Can kilo not use this function instead?

    I'm new to interfacing applications with kubernetes clusters, so if I'm missing something, my apologies. If it'd be welcome, I'd be happy to submit a pull request for this.

    opened by adamkpickering 16
  • remote VPN client

    Ok, enlighten me again! The docs are sketchy... For a vpn, now that kilo is running on 3 nodes, I'd like to connect my remote laptop for administration and web access. Where are we deriving the keys from? I imagine this is the "client" public key? And I can get the server/cluster side with wg showconf? I do appreciate the assistance, but yes, I will say the docs are a bit lacking; that being said, point me in the right direction and I can help with documentation once I get the connection sorted.

    VPN

    Kilo also enables peers outside of a Kubernetes cluster to connect to the VPN, allowing cluster applications to securely access external services and permitting developers and support to securely debug cluster resources. In order to declare a peer, start by defining a Kilo peer resource:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: kilo.squat.ai/v1alpha1
    kind: Peer
    metadata:
      name: squat
    spec:
      allowedIPs:
      - 10.5.0.1/32
      publicKey: GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg=
      persistentKeepalive: 10
    EOF
    opened by outbackdingo 15
  • No IP on interface using kubeadm

    Disclaimer: I'm quite new to Kubernetes, so bear with me if something I'm saying doesn't make sense.

    Context: I've been trying out various ways of deploying Kubernetes behind NATs (e.g. spreading a cluster between my house, a friend's, and a cloud provider) and kilo seems to perfectly address this use case, as Wireguard has NAT traversal built in. Other networking solutions seem to lack as deep support for NAT (maybe with the exception of Weave).

    Issue: No matter what I do, I cannot seem to get kilo to give an IP address to the Wireguard interface if I deploy a cluster with kubeadm. This is even with a single server on a public IP (so shouldn't have any NAT interference).

    I do see the annotations if I do kubectl get node -o yaml, but the config doesn't seem to be propagated to the interface, and kgctl tells me: did not find any valid Kilo nodes in the cluster. I've tried both standalone kilo as well as integrating with flannel, and neither seems to work.

    However, if I instead use k3s, everything works flawlessly as I would expect. I would prefer to use kubeadm to get the more bare-metal experience but I'm happy enough with k3s now.

    Is this a known issue or could I get some extra logs to help debug this? Thanks!

    opened by LalitMaganti 14
  • Unable to access local nodes when using topologies

    Hi Lucas!

    Thank you for this great project!

    I am trying to set up a multi provider k3s cluster using kilo. The machines roughly look like:

    1. oci location - 2 machines (both only have local ip addresses assigned to the local interfaces; external ips are managed via the internet gateway of the cloud provider)
    2. gcp location - 1 machine

    I haven't got to doing a multi provider setup yet. I am still trying to get the 2 machines in oci to talk to each other.

    I am trying to use kilo as the CNI directly. The network configuration is as follows (using placeholders here for the external ips):

    oci-master - internal ip 10.1.20.3, external <ext-master-ip>
    oci-worker - internal ip 10.1.20.2, external <ext-worker-ip>

    The machines can ping each other directly using the 10.1.20.x addresses.

    My issue is that, once they come up, I can't get the pods launched on each machine to talk to each other. I can ping pods from the machine that runs them, but not from master -> worker and vice versa.

    on my laptop

    > kubectl get po -o wide
    NAME                        READY   STATUS    RESTARTS   AGE   IP          NODE         NOMINATED NODE   READINESS GATES
    my-nginx-74f94c7795-j7kzv   1/1     Running   0          99m   10.42.1.5   oci-worker   <none>           <none>
    

    but on oci-master

    > ping 10.42.1.5
    PING 10.42.1.5 (10.42.1.5): 56 data bytes
    ^C--- 10.42.1.5 ping statistics ---
    4 packets transmitted, 0 packets received, 100% packet loss
    

    I think I should be able to reach every pod from any node in the cluster (AFAIK).

    Please let me know if there is additional info that would be helpful to include!

    My setup details are below

    I provisioned the machines as follows: oci-master:

    k3sup install \
        --ip <ext-master-ip> \
        --k3s-version 'v1.17.0+k3s.1' \
        --k3s-extra-args '--no-flannel --no-deploy metrics-server --no-deploy servicelb --no-deploy traefik --default-local-storage-path /k3s-local-storage --node-name oci-master --node-external-ip <ext-master-ip> --node-ip 10.1.20.3'
    
    kubectl annotate node oci-master \
        kilo.squat.ai/force-external-ip="<ext-ip-master>/32" \
        kilo.squat.ai/force-internal-ip="10.1.20.3/24" \
        kilo.squat.ai/location="oci" \
        kilo.squat.ai/leader="true" 
    

    oci-worker:

    k3sup join \
        --ip <ext-worker-ip> \
        --server-ip <ext-master-ip> \
        --k3s-version 'v1.17.0+k3s.1' \
        --k3s-extra-args '--no-flannel --node-name oci-worker --node-external-ip <ext-worker-ip> --node-ip 10.1.20.2'
    
    kubectl annotate node oci-worker \
        kilo.squat.ai/force-external-ip="<ext-worker-ip>/32" \
        kilo.squat.ai/force-internal-ip="10.1.20.2/24" \
        kilo.squat.ai/location="oci" 
    

    Finally setting up kilo

    kubectl apply -f k3s-kilo.yaml
    

    I had to make the same changes suggested in #11 and #27 to make sure that the kilo pods have the correct permissions, but I was able to get the pods to come up correctly.

    I am able to see logs like these when taking pod logs (with log-level=debug) on oci-master

    {"caller":"mesh.go:410","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2020-02-09T09:12:46.095414595Z"}
    {"caller":"mesh.go:412","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"ExternalIP":{"IP":"<ext-ip-master>","Mask":"/////w=="},"Key":"<key>","InternalIP":{"IP":"10.1.20.3","Mask":"////AA=="},"LastSeen":1581239566,"Leader":true,"Location":"oci","Name":"oci-master","Subnet":{"IP":"10.42.0.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="}},"ts":"2020-02-09T09:12:46.095454981Z"}
    

    on oci-worker

    {"caller":"mesh.go:410","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2020-02-09T10:44:48.564218597Z"}
    {"caller":"mesh.go:508","component":"kilo","level":"debug","msg":"successfully checked in local node in backend","ts":"2020-02-09T10:45:18.478913052Z"}
    {"caller":"mesh.go:675","component":"kilo","level":"debug","msg":"local node is not the leader","ts":"2020-02-09T10:45:18.4804814Z"}      
    {"caller":"mesh.go:410","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2020-02-09T10:45:18.481320232Z"}  
    {"caller":"mesh.go:412","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"ExternalIP":{"IP":"<ext-ip-worker>","Mask":"/////w=="},"Key":"<key>","InternalIP":{"IP":"10.1.20.2","Mask":"////AA=="},"LastSeen":1581245118,"Leader":false,"Location":"oci","Name":"oci-worker","Subnet":{"IP":"10.42.1.0","Mask":"////AA=="},"WireGuardIP":null},"ts":"2020-02-09T10:45:18.481367592Z"}
    

    oci-master

    > ifconfig
    ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
            inet 10.1.20.3  netmask 255.255.255.0  broadcast 10.1.20.255
            inet6 fe80::200:17ff:fe02:2f31  prefixlen 64  scopeid 0x20<link>
            ether 00:00:17:02:2f:31  txqueuelen 1000  (Ethernet)
            RX packets 945623  bytes 2361330833 (2.3 GB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 851708  bytes 304538145 (304.5 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    kilo0: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
            inet 10.4.0.1  netmask 255.255.0.0  destination 10.4.0.1
            unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 1354843  bytes 457783326 (457.7 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1354843  bytes 457783326 (457.7 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    tunl0: flags=193<UP,RUNNING,NOARP>  mtu 8980
            inet 10.42.0.1  netmask 255.255.255.255
            tunnel   txqueuelen 1000  (IPIP Tunnel)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 5  bytes 420 (420.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    > ip route
    default via 10.1.20.1 dev ens3
    default via 10.1.20.1 dev ens3 proto dhcp src 10.1.20.3 metric 100
    10.1.20.0/24 dev ens3 proto kernel scope link src 10.1.20.3
    10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.1
    10.42.1.0/24 via 10.1.20.2 dev tunl0 proto static onlink
    

    oci-worker

    > ifconfig
    ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
            inet 10.1.20.2  netmask 255.255.255.0  broadcast 10.1.20.255
            inet6 fe80::200:17ff:fe02:1682  prefixlen 64  scopeid 0x20<link>
            ether 00:00:17:02:16:82  txqueuelen 1000  (Ethernet)
            RX packets 231380  bytes 781401888 (781.4 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 221393  bytes 29979034 (29.9 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    kube-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.42.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::38f7:34ff:fed9:897e  prefixlen 64  scopeid 0x20<link>
            ether 26:d7:aa:ce:37:f8  txqueuelen 1000  (Ethernet)
            RX packets 21865  bytes 10732037 (10.7 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 19269  bytes 7046706 (7.0 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 78258  bytes 29977684 (29.9 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 78258  bytes 29977684 (29.9 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    tunl0: flags=193<UP,RUNNING,NOARP>  mtu 8980
            inet 10.42.1.1  netmask 255.255.255.255
            tunnel   txqueuelen 1000  (IPIP Tunnel)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 10  bytes 840 (840.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth5ee1a633: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::24d7:aaff:fece:37f8  prefixlen 64  scopeid 0x20<link>
            ether 26:d7:aa:ce:37:f8  txqueuelen 0  (Ethernet)
            RX packets 12748  bytes 10219673 (10.2 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 9890  bytes 4818258 (4.8 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth965708c2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::9cfc:9dff:fef1:dc7a  prefixlen 64  scopeid 0x20<link>
            ether 9e:fc:9d:f1:dc:7a  txqueuelen 0  (Ethernet)
            RX packets 22  bytes 1636 (1.6 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 21  bytes 1754 (1.7 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethd34408af: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::5077:76ff:fe3a:1b01  prefixlen 64  scopeid 0x20<link>
            ether 52:77:76:3a:1b:01  txqueuelen 0  (Ethernet)
            RX packets 9091  bytes 816526 (816.5 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 9442  bytes 2233086 (2.2 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    > ip route
    default via 10.1.20.1 dev ens3
    default via 10.1.20.1 dev ens3 proto dhcp src 10.1.20.2 metric 100
    10.1.20.0/24 dev ens3 proto kernel scope link src 10.1.20.2
    10.4.0.1 via 10.1.20.3 dev tunl0 proto static onlink
    10.42.0.0/24 via 10.1.20.3 dev tunl0 proto static onlink
    10.42.1.0/24 dev kube-bridge proto kernel scope link src 10.42.1.1
    169.254.0.0/16 dev ens3 proto dhcp scope link src 10.1.20.2 metric 100
    

    Other things that I've tried

    Interestingly, setting up another machine with a different region, I was able to see that the wireguard interfaces come up with the correct allowed-ips, and I was even able to ping 10.1.20.2 (oci-worker) directly over wireguard. Presumably that traffic goes from gcp-worker -> oci-master (leader for the oci location) -> oci-worker.

    opened by kelvl 14
  • How to create a K8s cluster with nodes in different cloud service providers

    Hi squat,

    In the issue #9 that I posted, I first made a WireGuard connection between VmOnAWS and VmOnGCP, then created a K8s cluster with those nodes, and applied Kilo as a final step.

    But in the issue #8, you said that

    For Kilo to pick up an existing Wireguard interface on the host is not supported.

    So I wanted to re-create a K8s cluster with nodes in different cloud service providers, and here is the point where my questions arise.

    When I follow the Kilo installation guide..

    Step 1: install WireGuard

    I am using Ubuntu as the guest VM's OS, so I installed WireGuard with apt install wireguard.

    # which wg
    /usr/bin/wg
    

    And for the clean state, I destroyed the existing WireGuard connection between VmOnAWS and VmOnGCP.

    # wg-quick down ./wg0.conf

    [#] wg showconf wg0
    [#] ip link delete dev wg0
    

    Step 2: open WireGuard port

    I opened UDP port 51820 for my AWS SecurityGroup and GCP SecurityGroup.

    Step 3: specify topology

    The instruction makes me kubectl annotate k8s nodes, but I have no k8s cluster, since I started from the clean state; nor can I make a k8s cluster (without using WireGuard or something) since one node is in AWS and another is in GCP.


    One of my guesses is that I could:

    1. Create a k8s cluster on AWS.
    2. Enable Kilo VPN between the k8s cluster on AWS and a VM on GCP.
    3. Make the VM on GCP a worker node of the k8s cluster on AWS, which I have not tried yet.

    Do you think this will work / make sense..?

    opened by jihoon-seo 13
  • kilo question - informational

    Installing kilo on 4 nodes in 2 different public network spaces, does kilo encrypt comms between nodes?

    How does one validate this encryption? Meaning, if it "automagically" encrypts node communications, how can I verify it from node to node?

    Also, can my remote "workstation" (laptop) from far away be a client to the cluster, and also utilize the vpn for internet access along with management? Sorry, the docs weren't so clear to me. kilo is installed.

    opened by outbackdingo 13
  • graph handler and kgctl graph render wrong topology

    Because the graph handler and kgctl graph don't resolve endpoints (e.g. DNS names from the force-endpoint annotation) before building the topology, it is possible that the rendered graph differs from the actual topology.

    opened by leonnicolas 0
  • pkg/mesh/routes.go: add iptables forward allow rules for segment.

    Before this commit (see #241 and #244) we added the forward ALLOW rule only for the node's pod CIDR and not all pod CIDRs of a location. This commit adds the forward ALLOW rule for packets from (source) and to (destination) all pod CIDRs of the location if the node is a leader node.

    Signed-off-by: leonnicolas [email protected]

    opened by leonnicolas 0
  • Single server control plane (kubeadm 1.22), no connectivity on fresh install.

    I am setting up a cluster and hit an issue when using kilo as the only CNI.

    kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/arm64"}
    
    uname -a
    Linux k8s-master-01 5.11.0-1019-oracle #20~20.04.1-Ubuntu SMP Tue Sep 21 14:20:46 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
    
    # cat /etc/lsb-release
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=20.04
    DISTRIB_CODENAME=focal
    DISTRIB_DESCRIPTION="Ubuntu 20.04.3 LTS"
    

    Installed kilo with

    kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/crds.yaml
    kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-kubeadm.yaml
    
    # kubectl get po -A
    NAMESPACE     NAME                                    READY   STATUS             RESTARTS        AGE
    kube-system   coredns-78fcd69978-l5rdb                1/1     Running            0               73m
    kube-system   coredns-78fcd69978-nqxtj                1/1     Running            0               73m
    kube-system   etcd-k8s-master-01                      1/1     Running            32              73m
    kube-system   kilo-g4kgq                              1/1     Running            0               2m11s
    kube-system   kpubber-7lsgw                           0/1     CrashLoopBackOff   8 (2m14s ago)   26m
    kube-system   kube-apiserver-k8s-master-01            1/1     Running            4               73m
    kube-system   kube-controller-manager-k8s-master-01   1/1     Running            7               73m
    kube-system   kube-proxy-zrr5m                        1/1     Running            0               73m
    kube-system   kube-scheduler-k8s-master-01            1/1     Running            18              73m
    
    # kubectl get nodes
    NAME            STATUS   ROLES                  AGE   VERSION
    k8s-master-01   Ready    control-plane,master   73m   v1.22.2
    
    # kubectl get nodes k8s-master-01 -o yaml | grep annot -A 10
      annotations:
        kilo.squat.ai/discovered-endpoints: '{}'
        kilo.squat.ai/endpoint: 240.238.90.130:51820
        kilo.squat.ai/force-endpoint: 240.238.90.130
        kilo.squat.ai/granularity: location
        kilo.squat.ai/internal-ip: 10.0.0.27/24
        kilo.squat.ai/key: zUcsZwT7Qlz/wmzKfHzHNP5oCNaHpVLbcSiR9G64zgU=
        kilo.squat.ai/last-seen: "1633713905"
        kilo.squat.ai/location: oracle-alekc
        kilo.squat.ai/wireguard-ip: 10.4.0.1/16
        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    
    # wg
    interface: kilo0
      public key: zUcsZwT7Qlz/wmzKfHzHNP5oCNaHpVLbcSiR9G64zgU=
      private key: (hidden)
      listening port: 51820
    
    IPTABLES

    # iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
    KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
    KUBE-FIREWALL  all  --  anywhere             anywhere
    KILO-IPIP  ipencap--  anywhere             anywhere             /* Kilo: jump to IPIP chain */
    DROP       ipencap--  anywhere             anywhere             /* Kilo: reject other IPIP traffic */
    
    Chain FORWARD (policy DROP)
    target     prot opt source               destination
    KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
    KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
    KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
    DOCKER-USER  all  --  anywhere             anywhere
    DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
    ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
    DOCKER     all  --  anywhere             anywhere
    ACCEPT     all  --  anywhere             anywhere
    ACCEPT     all  --  anywhere             anywhere
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
    KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
    KUBE-FIREWALL  all  --  anywhere             anywhere
    
    Chain DOCKER (1 references)
    target     prot opt source               destination
    
    Chain DOCKER-ISOLATION-STAGE-1 (1 references)
    target     prot opt source               destination
    DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
    RETURN     all  --  anywhere             anywhere
    
    Chain DOCKER-ISOLATION-STAGE-2 (1 references)
    target     prot opt source               destination
    DROP       all  --  anywhere             anywhere
    RETURN     all  --  anywhere             anywhere
    
    Chain DOCKER-USER (1 references)
    target     prot opt source               destination
    RETURN     all  --  anywhere             anywhere
    
    Chain KILO-IPIP (1 references)
    target     prot opt source               destination
    ACCEPT     all  --  k8s-master-1.subnet09021850.vcn09021850.oraclevcn.com  anywhere             /* Kilo: allow IPIP traffic */
    
    Chain KUBE-EXTERNAL-SERVICES (2 references)
    target     prot opt source               destination
    
    Chain KUBE-FIREWALL (2 references)
    target     prot opt source               destination
    DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
    DROP       all  -- !localhost/8          localhost/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
    
    Chain KUBE-FORWARD (1 references)
    target     prot opt source               destination
    DROP       all  --  anywhere             anywhere             ctstate INVALID
    ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
    ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
    ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
    
    Chain KUBE-KUBELET-CANARY (0 references)
    target     prot opt source               destination
    
    Chain KUBE-NODEPORTS (1 references)
    target     prot opt source               destination
    
    Chain KUBE-PROXY-CANARY (0 references)
    target     prot opt source               destination
    
    Chain KUBE-SERVICES (2 references)
    target     prot opt source               destination
    

    My understanding is that at this point (especially since this is just a single node), it should still be working.

    However, this is what's happening:

    # kubectl get pods -o wide -A
    NAMESPACE     NAME                                    READY   STATUS             RESTARTS        AGE   IP           NODE            NOMINATED NODE   READINESS GATES
    kube-system   coredns-78fcd69978-l5rdb                1/1     Running            0               81m   10.244.0.3   k8s-master-01   <none>           <none>
    kube-system   coredns-78fcd69978-nqxtj                1/1     Running            0               81m   10.244.0.4   k8s-master-01   <none>           <none>
    kube-system   etcd-k8s-master-01                      1/1     Running            32              81m   10.0.0.27    k8s-master-01   <none>           <none>
    kube-system   kilo-g4kgq                              1/1     Running            0               10m   10.0.0.27    k8s-master-01   <none>           <none>
    kube-system   kpubber-7lsgw                           0/1     CrashLoopBackOff   9 (4m30s ago)   35m   10.244.0.2   k8s-master-01   <none>           <none>
    kube-system   kube-apiserver-k8s-master-01            1/1     Running            4               81m   10.0.0.27    k8s-master-01   <none>           <none>
    kube-system   kube-controller-manager-k8s-master-01   1/1     Running            7               81m   10.0.0.27    k8s-master-01   <none>           <none>
    kube-system   kube-proxy-zrr5m                        1/1     Running            0               81m   10.0.0.27    k8s-master-01   <none>           <none>
    kube-system   kube-scheduler-k8s-master-01            1/1     Running            18              81m   10.0.0.27    k8s-master-01   <none>           <none>
    
    kubectl run --rm -it --image=alpine -- ash
    # ping 10.244.0.3
    PING 10.244.0.3 (10.244.0.3): 56 data bytes
    ^C
    --- 10.244.0.3 ping statistics ---
    4 packets transmitted, 0 packets received, 100% packet loss
    
    / #  nslookup kubernetes.default
    ;; connection timed out; no servers could be reached
    
    cat /etc/resolv.conf
    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local vcn09021850.oraclevcn.com
    options ndots:5
    

    If I remove kilo and install flannel, for example:

    # kubectl delete -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-kubeadm.yaml
    configmap "kilo" deleted
    serviceaccount "kilo" deleted
    clusterrole.rbac.authorization.k8s.io "kilo" deleted
    clusterrolebinding.rbac.authorization.k8s.io "kilo" deleted
    daemonset.apps "kilo" deleted
    
    # kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    
    
    rm /etc/cni/net.d/10-kilo.conflist
    reboot
    
    # kubectl run --rm -it --image=alpine -- ash
    # nslookup kubernetes.default
    Server:		10.96.0.10
    Address:	10.96.0.10:53
    
    ** server can't find kubernetes.default: NXDOMAIN
    
    ** server can't find kubernetes.default: NXDOMAIN
    
    / #
    
    # ping google.com
    PING google.com (142.250.179.238): 56 data bytes
    64 bytes from 142.250.179.238: seq=0 ttl=117 time=1.336 ms
    ^C
    --- google.com ping statistics ---
    1 packets transmitted, 1 packets received, 0% packet loss
    round-trip min/avg/max = 1.336/1.336/1.336 ms
    / #
    
    

    everything works. If I go backwards (delete flannel and the cni config, install kilo, reboot), networking no longer works.

    p.s.

    # kubectl logs kilo-r5sf5 -n kube-system
    {"caller":"main.go:273","msg":"Starting Kilo network mesh 'f90288133d5543398d032913a7d558960ccf2ad0'.","ts":"2021-10-08T17:55:14.142951243Z"}
    {"caller":"cni.go:61","component":"kilo","err":"failed to read IPAM config from CNI config list file: no IP ranges specified","level":"warn","msg":"failed to get CIDR from CNI file; overwriting it","ts":"2021-10-08T17:55:14.243883511Z"}
    {"caller":"cni.go:69","component":"kilo","level":"info","msg":"CIDR in CNI file is empty","ts":"2021-10-08T17:55:14.243920391Z"}
    {"CIDR":"10.244.0.0/24","caller":"cni.go:74","component":"kilo","level":"info","msg":"setting CIDR in CNI file","ts":"2021-10-08T17:55:14.243932511Z"}
    {"caller":"mesh.go:545","component":"kilo","level":"info","msg":"WireGuard configurations are different","ts":"2021-10-08T17:55:14.49968136Z"}
    
    opened by alekc 2
  • etcd traffic not going through the Kilo interface

    Hi, I have a cluster (3 master nodes) with an embedded etcd that was set up with K3S and the --no-flannel flag. After installing Kilo with the --mesh-granularity=full argument, everything is working except the etcd peer communication traffic, which is not going through the Kilo tunnel.

    Is there a reason why? What can I do to make it work?

    opened by Sandah 2
  • migrate to golang.zx2c4.com/wireguard/wgctrl

    This commit introduces the usage of wgctrl. It avoids exec calls to the wg command and parsing the output of wg show.

    opened by leonnicolas 5
  • Include Example WireGuard Exporter DaemonSet

    I think it could be useful for users to know how to monitor the Kilo mesh in their clusters. For this, we could leverage an existing project that implements a Prometheus exporter for WireGuard, e.g. https://github.com/MindFlavor/prometheus_wireguard_exporter.

    I propose we write an example DaemonSet for this exporter along with a matching PodMonitor and include them in our manifests directory. We could then add a page to our docs explaining how to monitor Kilo with Prometheus and suggest using this new configuration in conjunction with our existing PodMonitor.

    opened by squat 0
  • Prepare move to kilo-io

    This commit changes all package paths from squat/kilo to kilo-io/kilo and the docker image name from squat/kilo to kiloio/squat. The API name and comments regarding the website kilo.squat.ai are unchanged.

    Signed-off-by: leonnicolas [email protected]

    opened by leonnicolas 0
  • Add ability to set wireguard IP via node annotation

    In cases where one needs prior knowledge of the node's IP, for example to:

    1. Create host entries that resolve hostnames to kilo-created wireguard IPs.

    A node label that can be set on cluster creation could be implemented to achieve this. Suggesting to use a label because, for most k8s distros, this can be set on node/cluster creation. This label should indicate a preferred wireguard IP to be set by kilo and be ignored if another node already exists with the IP. This means all other functionality can stay the same. So in short:

    2. Check the node label, e.g. kilo.squat.ai/wireguard-ip:"<IPV4 value>"
    3. If the label exists, check if the IP is valid for the kilo subnet and is available.
    4. Use the IP for the kilo interface. @squat
    opened by jawabuu 2
  • Kilo on RKE cluster: non-master PODs CANNOT access the kube-apiserver endpoint

    Hi @squat, I strongly believe in your project and hope you could help me with the final issue...

    I have successfully installed kilo on my RKE cluster as a CNI:

    • all PODs are working,
    • service discovery is working,
    • connection between RKE nodes is secured
    • PODs in the master-node have access to kube-apiserver endpoint

    but I have hit another issue: PODs on the non-master nodes CANNOT access the kube-apiserver (kubernetes.default -> 10.45.0.1:443 -> 172.25.132.35:6443)

    It is critical for k8s operators like Istio, Prometheus, Infinispan, etc. and I got stuck with that...

    I guess something is wrong with the network routing (KILO-NAT, KILO-IPIP iptables). Please check my k8s configuration, test PODs, and the iptables of master-node and node-1; you might find the reason there:

    kubectl get nodes -o wide

    NAME                        STATUS   ROLES                      AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
    foundation-musanin-master   Ready    controlplane,etcd,worker   47h   v1.17.4   172.25.132.35    <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://19.3.8
    foundation-musanin-node-1   Ready    worker                     47h   v1.17.4   172.25.132.55    <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://19.3.8
    foundation-musanin-node-2   Ready    worker                     47h   v1.17.4   172.25.132.230   <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://19.3.8
    
    ## Install kilo DaemonSet:  https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon.yaml
     
    kubectl delete ds -n kube-system kilo
     
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kilo
      namespace: kube-system
      labels:
        app.kubernetes.io/name: kilo
    data:
      cni-conf.json: |
        {
           "cniVersion":"0.3.1",
           "name":"kilo",
           "plugins":[
              {
                 "name":"kubernetes",
                 "type":"bridge",
                 "bridge":"kube-bridge",
                 "isDefaultGateway":true,
                 "forceAddress":true,
                 "mtu": 1420,
                 "ipam":{
                    "type":"host-local"
                 }
              },
              {
                 "type":"portmap",
                 "snat":true,
                 "capabilities":{
                    "portMappings":true
                 }
              }
           ]
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kilo
      namespace: kube-system
      labels:
        app.kubernetes.io/name: kilo
        app.kubernetes.io/part-of: kilo
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: kilo
          app.kubernetes.io/part-of: kilo
      template:
        metadata:
          labels:
            app.kubernetes.io/name: kilo
            app.kubernetes.io/part-of: kilo
        spec:
          serviceAccountName: kilo
          hostNetwork: true
          containers:
          - name: kilo
            image: squat/kilo
            imagePullPolicy: Always ## image updates are very high
            ## list args details: https://kilo.squat.ai/docs/kg/#usage
            args:
            - --kubeconfig=/etc/kubernetes/config
            - --hostname=\$(NODE_NAME)
            - --create-interface=true
            - --interface=kilo0
            - --log-level=debug
            env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            ports:
            - containerPort: 1107
              name: metrics
            securityContext:
              privileged: true
            volumeMounts:
            - name: cni-conf-dir
              mountPath: /etc/cni/net.d
            - name: kilo-dir
              mountPath: /var/lib/kilo
            - name: kubeconfig
              mountPath: /etc/kubernetes
              readOnly: true
            - name: lib-modules
              mountPath: /lib/modules
              readOnly: true
            - name: xtables-lock
              mountPath: /run/xtables.lock
              readOnly: false
          initContainers:
          - name: install-cni
            image: squat/kilo
            ## CAUTION!!!: init-container removes all CNI configs in the node (dir: /etc/cni/net.d)
            command:
            - /bin/sh
            - -c
            - set -e -x;
              cp /opt/cni/bin/* /host/opt/cni/bin/;
              TMP_CONF="\$CNI_CONF_NAME".tmp;
              echo "\$CNI_NETWORK_CONFIG" > \$TMP_CONF;
              rm -f /host/etc/cni/net.d/*;
              mv \$TMP_CONF /host/etc/cni/net.d/\$CNI_CONF_NAME
            env:
            - name: CNI_CONF_NAME
              value: 10-kilo.conflist
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: kilo
                  key: cni-conf.json
            volumeMounts:
            - name: cni-bin-dir
              mountPath: /host/opt/cni/bin
            - name: cni-conf-dir
              mountPath: /host/etc/cni/net.d
          tolerations:
          - effect: NoSchedule
            operator: Exists
          - effect: NoExecute
            operator: Exists
          volumes:
          - name: cni-bin-dir
            hostPath:
              path: /opt/cni/bin
          - name: cni-conf-dir
            hostPath:
              path: /etc/cni/net.d
          - name: kilo-dir
            hostPath:
              path: /var/lib/kilo
          - name: kubeconfig
            configMap:
              name: kubeconfig-in-cluster
          - name: lib-modules
            hostPath:
              path: /lib/modules
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
    EOF
    

    kubectl get pods --all-namespaces -o wide

    NAMESPACE            NAME                                      READY   STATUS      RESTARTS   AGE     IP               NODE                        NOMINATED NODE   READINESS GATES
    default              master                                    1/1     Running     0          10s     10.44.0.18       foundation-musanin-master   <none>           <none>
    default              node1                                     1/1     Running     0          10s     10.44.2.31       foundation-musanin-node-1   <none>           <none>
    default              node2                                     1/1     Running     0          10s     10.44.1.20       foundation-musanin-node-2   <none>           <none>
    kube-system          coredns-7c5566588d-gwjg4                  1/1     Running     1          6h      10.44.2.23       foundation-musanin-node-1   <none>           <none>
    kube-system          coredns-7c5566588d-kxphm                  1/1     Running     1          6h      10.44.1.18       foundation-musanin-node-2   <none>           <none>
    kube-system          coredns-autoscaler-65bfc8d47d-kqs2m       1/1     Running     1          6h10m   10.44.2.22       foundation-musanin-node-1   <none>           <none>
    kube-system          kilo-gmkrl                                1/1     Running     0          88m     172.25.132.55    foundation-musanin-node-1   <none>           <none>
    kube-system          kilo-sj9jz                                1/1     Running     0          88m     172.25.132.230   foundation-musanin-node-2   <none>           <none>
    kube-system          kilo-tsn5v                                1/1     Running     0          88m     172.25.132.35    foundation-musanin-master   <none>           <none>
    kube-system          metrics-server-6b55c64f86-m5t28           1/1     Running     2          28h     10.44.0.13       foundation-musanin-master   <none>           <none>
    kube-system          rke-coredns-addon-deploy-job-j2k85        0/1     Completed   0          28h     172.25.132.35    foundation-musanin-master   <none>           <none>
    kube-system          rke-ingress-controller-deploy-job-48m49   0/1     Completed   0          28h     172.25.132.35    foundation-musanin-master   <none>           <none>
    kube-system          rke-metrics-addon-deploy-job-8vdhx        0/1     Completed   0          28h     172.25.132.35    foundation-musanin-master   <none>           <none>
    kube-system          rke-network-plugin-deploy-job-g796v       0/1     Completed   0          28h     172.25.132.35    foundation-musanin-master   <none>           <none>
    local-path-storage   local-path-provisioner-5bd6f65fdf-2kqks   1/1     Running     2          22h     10.44.0.14       foundation-musanin-master   <none>           <none>
    pf                   echoserver-977db48cd-f4msv                1/1     Running     1          21h     10.44.2.21       foundation-musanin-node-1   <none>           <none>
    

    kubectl get service --all-namespaces -o wide

    NAMESPACE       NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE   SELECTOR
    default         kubernetes                           ClusterIP   10.45.0.1       <none>        443/TCP                        28h   <none>
    ingress-nginx   default-http-backend                 ClusterIP   10.45.193.115   <none>        80/TCP                         28h   app=default-http-backend
    kube-system     kube-dns                             ClusterIP   10.45.0.10      <none>        53/UDP,53/TCP,9153/TCP         28h   k8s-app=kube-dns
    kube-system     metrics-server                       ClusterIP   10.45.53.231    <none>        443/TCP                        28h   k8s-app=metrics-server
    kube-system     monitoring-kube-prometheus-kubelet   ClusterIP   None            <none>        10250/TCP,10255/TCP,4194/TCP   20h   <none>
    pf              echoserver                           ClusterIP   10.45.214.208   <none>        8080/TCP                       22h   app=echoserver
    

    kubectl get endpoints --all-namespaces -o wide

    NAMESPACE            NAME                                 ENDPOINTS                                                                  AGE
    default              kubernetes                           172.25.132.35:6443                                                         28h
    ingress-nginx        default-http-backend                 <none>                                                                     28h
    kube-system          kube-controller-manager              <none>                                                                     28h
    kube-system          kube-dns                             10.44.1.18:53,10.44.2.23:53,10.44.1.18:53 + 3 more...                      28h
    kube-system          kube-scheduler                       <none>                                                                     28h
    kube-system          metrics-server                       10.44.0.13:443                                                             28h
    kube-system          monitoring-kube-prometheus-kubelet   172.25.132.230:10255,172.25.132.35:10255,172.25.132.55:10255 + 6 more...   20h
    local-path-storage   rancher.io-local-path                <none>                                                                     28h
    pf                   echoserver                           10.44.2.21:8080                                                            22h
    

    echoserver POD is accessible from all nodes

    ## access by service name is OK
    kubectl exec -it master curl http://echoserver.pf:8080 - is OK
    kubectl exec -it node1 curl http://echoserver.pf:8080 - is OK
    kubectl exec -it node2 curl http://echoserver.pf:8080 - is OK
    ## access by service IP is OK
    kubectl exec -it master curl http://10.45.214.208:8080 - is OK
    kubectl exec -it node1 curl http://10.45.214.208:8080 - is OK
    kubectl exec -it node2 curl http://10.45.214.208:8080 - is OK
    

    but kube-apiserver API is accessible only from MASTER POD !!!

    ## access to kube-apiserver from master is OK
    kubectl exec -it master -- curl -kv https://kubernetes.default/api - is OK
    kubectl exec -it master -- curl -kv https://172.25.132.35:6443/api - is OK
    
    
    ## access to kube-apiserver from RKE nodes - Operation timed out:
    kubectl exec -it node1 -- curl -kv https://kubernetes.default/api - ERROR
    kubectl exec -it node2 -- curl -kv https://kubernetes.default/api - ERROR
    *   Trying 10.45.0.1:443...
    * connect to 10.45.0.1 port 443 failed: Operation timed out
    * Failed to connect to kubernetes.default port 443: Operation timed out
    * Closing connection 0
    curl: (28) Failed to connect to kubernetes.default port 443: Operation timed out
    command terminated with exit code 28
    
    kubectl exec -it node1 -- curl -kv https://172.25.132.35:6443/api - ERROR
    kubectl exec -it node2 -- curl -kv https://172.25.132.35:6443/api - ERROR
    *   Trying 172.25.132.35:6443...
    * connect to 172.25.132.35 port 6443 failed: Operation timed out
    * Failed to connect to 172.25.132.35 port 6443: Operation timed out
    * Closing connection 0
    curl: (28) Failed to connect to 172.25.132.35 port 6443: Operation timed out
    command terminated with exit code 28
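
    To narrow down where the apiserver-bound packets are dropped, the following diagnostic sketch could be run on node-1 while reproducing the failure (it only assumes the interface names kilo0 and eth0 shown in the settings below):

    ## watch whether traffic to the master leaves via WireGuard or via the LAN
    sudo tcpdump -ni kilo0 host 172.25.132.35 or host 10.45.0.1 &
    sudo tcpdump -ni eth0 tcp port 6443 &
    ## reproduce the failure from a pod scheduled on node-1
    kubectl exec -it node1 -- curl -kv --max-time 10 https://172.25.132.35:6443/api
    ## check whether the WireGuard peer counters change
    sudo wg show kilo0 transfer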
    

    Below are the network settings of the master node (foundation-musanin-master).

    sudo iptables-save > ~/temp/20210720_iptables_master

    # Generated by iptables-save v1.4.21 on Tue Jul 20 09:18:53 2021
    *mangle
    :PREROUTING ACCEPT [7023874:6584966013]
    :INPUT ACCEPT [4419450:1084240089]
    :FORWARD ACCEPT [3616:218503]
    :OUTPUT ACCEPT [4385189:1292219696]
    :POSTROUTING ACCEPT [4388805:1292438199]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    COMMIT
    # Completed on Tue Jul 20 09:18:53 2021
    # Generated by iptables-save v1.4.21 on Tue Jul 20 09:18:53 2021
    *filter
    :INPUT ACCEPT [721284:174744895]
    :FORWARD ACCEPT [603:36368]
    :OUTPUT ACCEPT [714871:209627914]
    :DOCKER - [0:0]
    :DOCKER-ISOLATION-STAGE-1 - [0:0]
    :DOCKER-ISOLATION-STAGE-2 - [0:0]
    :DOCKER-USER - [0:0]
    :KILO-IPIP - [0:0]
    :KUBE-EXTERNAL-SERVICES - [0:0]
    :KUBE-FIREWALL - [0:0]
    :KUBE-FORWARD - [0:0]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    :KUBE-SERVICES - [0:0]
    -A INPUT -j KUBE-FIREWALL
    -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
    -A INPUT -p ipv4 -m comment --comment "Kilo: jump to IPIP chain" -j KILO-IPIP
    -A INPUT -p ipv4 -m comment --comment "Kilo: reject other IPIP traffic" -j DROP
    -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
    -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A FORWARD -j DOCKER-USER
    -A FORWARD -j DOCKER-ISOLATION-STAGE-1
    -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -o docker0 -j DOCKER
    -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
    -A FORWARD -i docker0 -o docker0 -j ACCEPT
    -A OUTPUT -j KUBE-FIREWALL
    -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
    -A DOCKER-ISOLATION-STAGE-1 -j RETURN
    -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
    -A DOCKER-ISOLATION-STAGE-2 -j RETURN
    -A DOCKER-USER -j RETURN
    -A KILO-IPIP -s 172.25.132.35/32 -m comment --comment "Kilo: allow IPIP traffic" -j ACCEPT
    -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
    -A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
    -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
    -A KUBE-FORWARD -s 10.44.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A KUBE-FORWARD -d 10.44.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A KUBE-SERVICES -d 10.45.193.115/32 -p tcp -m comment --comment "ingress-nginx/default-http-backend: has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
    COMMIT
    # Completed on Tue Jul 20 09:18:53 2021
    # Generated by iptables-save v1.4.21 on Tue Jul 20 09:18:53 2021
    *nat
    :PREROUTING ACCEPT [409724:899785231]
    :INPUT ACCEPT [1027:177582]
    :OUTPUT ACCEPT [7378:462422]
    :POSTROUTING ACCEPT [7501:469896]
    :DOCKER - [0:0]
    :KILO-NAT - [0:0]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-MARK-DROP - [0:0]
    :KUBE-MARK-MASQ - [0:0]
    :KUBE-NODEPORTS - [0:0]
    :KUBE-POSTROUTING - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    :KUBE-SEP-2ZKD5TCM6OYP5Z4F - [0:0]
    :KUBE-SEP-AWXKJFKBRSHYCFIX - [0:0]
    :KUBE-SEP-CN66RQ6OOEA3MWHE - [0:0]
    :KUBE-SEP-DWYO5FEH5J22EHP6 - [0:0]
    :KUBE-SEP-OC25G42R336ONLNP - [0:0]
    :KUBE-SEP-P2V4QTKODS75TTCV - [0:0]
    :KUBE-SEP-QPYANCAVUCDJHCNQ - [0:0]
    :KUBE-SEP-TB7SM5ETL6QLHI7C - [0:0]
    :KUBE-SEP-YYRNE5UY2T7VHRUE - [0:0]
    :KUBE-SERVICES - [0:0]
    :KUBE-SVC-7ECNF7D6GS4GHP3A - [0:0]
    :KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
    :KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
    :KUBE-SVC-LC5QY66VUV2HJ6WZ - [0:0]
    :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
    :KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
    -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
    -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
    -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
    -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
    -A POSTROUTING -s 10.44.0.0/24 -m comment --comment "Kilo: jump to KILO-NAT chain" -j KILO-NAT
    -A DOCKER -i docker0 -j RETURN
    -A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
    -A KILO-NAT -d 10.44.0.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 172.25.132.35/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
    -A KILO-NAT -d 10.44.2.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 172.25.132.55/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.3/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
    -A KILO-NAT -d 10.44.1.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 172.25.132.230/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.3/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -m comment --comment "Kilo: NAT remaining packets" -j MASQUERADE
    -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
    -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
    -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
    -A KUBE-SEP-2ZKD5TCM6OYP5Z4F -s 10.44.2.23/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-2ZKD5TCM6OYP5Z4F -p tcp -m tcp -j DNAT --to-destination 10.44.2.23:9153
    -A KUBE-SEP-AWXKJFKBRSHYCFIX -s 10.44.2.21/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-AWXKJFKBRSHYCFIX -p tcp -m tcp -j DNAT --to-destination 10.44.2.21:8080
    -A KUBE-SEP-CN66RQ6OOEA3MWHE -s 10.44.0.13/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-CN66RQ6OOEA3MWHE -p tcp -m tcp -j DNAT --to-destination 10.44.0.13:443
    -A KUBE-SEP-DWYO5FEH5J22EHP6 -s 172.25.132.35/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-DWYO5FEH5J22EHP6 -p tcp -m tcp -j DNAT --to-destination 172.25.132.35:6443
    -A KUBE-SEP-OC25G42R336ONLNP -s 10.44.1.18/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-OC25G42R336ONLNP -p tcp -m tcp -j DNAT --to-destination 10.44.1.18:53
    -A KUBE-SEP-P2V4QTKODS75TTCV -s 10.44.2.23/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-P2V4QTKODS75TTCV -p tcp -m tcp -j DNAT --to-destination 10.44.2.23:53
    -A KUBE-SEP-QPYANCAVUCDJHCNQ -s 10.44.2.23/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-QPYANCAVUCDJHCNQ -p udp -m udp -j DNAT --to-destination 10.44.2.23:53
    -A KUBE-SEP-TB7SM5ETL6QLHI7C -s 10.44.1.18/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-TB7SM5ETL6QLHI7C -p tcp -m tcp -j DNAT --to-destination 10.44.1.18:9153
    -A KUBE-SEP-YYRNE5UY2T7VHRUE -s 10.44.1.18/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-YYRNE5UY2T7VHRUE -p udp -m udp -j DNAT --to-destination 10.44.1.18:53
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.53.231/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.53.231/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-SVC-LC5QY66VUV2HJ6WZ
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.214.208/32 -p tcp -m comment --comment "pf/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.214.208/32 -p tcp -m comment --comment "pf/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-SVC-7ECNF7D6GS4GHP3A
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
    -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
    -A KUBE-SVC-7ECNF7D6GS4GHP3A -j KUBE-SEP-AWXKJFKBRSHYCFIX
    -A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OC25G42R336ONLNP
    -A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-P2V4QTKODS75TTCV
    -A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TB7SM5ETL6QLHI7C
    -A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-2ZKD5TCM6OYP5Z4F
    -A KUBE-SVC-LC5QY66VUV2HJ6WZ -j KUBE-SEP-CN66RQ6OOEA3MWHE
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-DWYO5FEH5J22EHP6
    -A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YYRNE5UY2T7VHRUE
    -A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-QPYANCAVUCDJHCNQ
    COMMIT
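
    To see whether the rules above actually match the apiserver traffic, the per-rule packet counters can be printed while reproducing the failure; a diagnostic sketch using the chains from the dump above:

    ## packet counters for the Kilo and kube-proxy NAT chains
    sudo iptables -t nat -L KILO-NAT -v -n --line-numbers
    sudo iptables -t nat -L KUBE-SERVICES -v -n --line-numbers
    ## and for the forwarding rules in the filter table
    sudo iptables -L KUBE-FORWARD -v -n --line-numbers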
    

    sudo ifconfig

    docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
            ether 02:42:fd:d8:9f:77  txqueuelen 0  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.25.132.35  netmask 255.255.254.0  broadcast 172.25.133.255
            inet6 fe80::ef5:e143:acdd:a51a  prefixlen 64  scopeid 0x20<link>
            inet6 fe80::2483:a3e7:45f:fe20  prefixlen 64  scopeid 0x20<link>
            inet6 fe80::d43f:5c55:1e06:d744  prefixlen 64  scopeid 0x20<link>
            ether 00:15:5d:6a:82:4e  txqueuelen 1000  (Ethernet)
            RX packets 6930994  bytes 6920692627 (6.4 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 335105  bytes 346039095 (330.0 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    kilo0: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
            inet 10.4.0.1  netmask 255.255.0.0  destination 10.4.0.1
            unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
            RX packets 20  bytes 3312 (3.2 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 23  bytes 2288 (2.2 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    kube-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet 10.44.0.1  netmask 255.255.255.0  broadcast 10.44.0.255
            inet6 fe80::bc1c:b3ff:fe5d:e206  prefixlen 64  scopeid 0x20<link>
            ether be:1c:b3:5d:e2:06  txqueuelen 1000  (Ethernet)
            RX packets 137428  bytes 20489107 (19.5 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 135650  bytes 68464111 (65.2 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 4892759  bytes 1199043137 (1.1 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 4892759  bytes 1199043137 (1.1 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
            inet 10.44.0.1  netmask 255.255.255.255
            tunnel   txqueuelen 1000  (IPIP Tunnel)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth48a778ff: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet6 fe80::f0e7:acff:fef4:ee48  prefixlen 64  scopeid 0x20<link>
            ether f2:e7:ac:f4:ee:48  txqueuelen 0  (Ethernet)
            RX packets 89964  bytes 15910534 (15.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 89612  bytes 25614788 (24.4 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vetha48db6f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet6 fe80::9c48:ffff:fed2:3616  prefixlen 64  scopeid 0x20<link>
            ether 9e:48:ff:d2:36:16  txqueuelen 0  (Ethernet)
            RX packets 47376  bytes 6492961 (6.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 46019  bytes 42837623 (40.8 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethd75ae550: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet6 fe80::4a0:fbff:fe27:5a3f  prefixlen 64  scopeid 0x20<link>
            ether 06:a0:fb:27:5a:3f  txqueuelen 0  (Ethernet)
            RX packets 30  bytes 3138 (3.0 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 32  bytes 5997 (5.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    

    sudo ip a

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:15:5d:6a:82:4e brd ff:ff:ff:ff:ff:ff
        inet 172.25.132.35/23 brd 172.25.133.255 scope global noprefixroute dynamic eth0
           valid_lft 81355sec preferred_lft 81355sec
        inet6 fe80::d43f:5c55:1e06:d744/64 scope link tentative noprefixroute dadfailed
           valid_lft forever preferred_lft forever
        inet6 fe80::ef5:e143:acdd:a51a/64 scope link tentative noprefixroute dadfailed
           valid_lft forever preferred_lft forever
        inet6 fe80::2483:a3e7:45f:fe20/64 scope link tentative noprefixroute dadfailed
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:fd:d8:9f:77 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    4: kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue state UP group default qlen 1000
        link/ether be:1c:b3:5d:e2:06 brd ff:ff:ff:ff:ff:ff
        inet 10.44.0.1/24 brd 10.44.0.255 scope global kube-bridge
           valid_lft forever preferred_lft forever
        inet6 fe80::bc1c:b3ff:fe5d:e206/64 scope link
           valid_lft forever preferred_lft forever
    5: vetha48db6f0@if...: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
        link/ether 9e:48:ff:d2:36:16 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::9c48:ffff:fed2:3616/64 scope link
           valid_lft forever preferred_lft forever
    6: veth48a778ff@if...: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
        link/ether f2:e7:ac:f4:ee:48 brd ff:ff:ff:ff:ff:ff link-netnsid 1
        inet6 fe80::f0e7:acff:fef4:ee48/64 scope link
           valid_lft forever preferred_lft forever
    7: kilo0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
        link/none
        inet 10.4.0.1/16 brd 10.4.255.255 scope global kilo0
           valid_lft forever preferred_lft forever
    8: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
        inet 10.44.0.1/32 brd 10.44.0.1 scope global tunl0
           valid_lft forever preferred_lft forever
    12: vethd75ae550@if...: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
        link/ether 06:a0:fb:27:5a:3f brd ff:ff:ff:ff:ff:ff link-netnsid 2
        inet6 fe80::4a0:fbff:fe27:5a3f/64 scope link
           valid_lft forever preferred_lft forever
    

    sudo ip r

    default via 172.25.132.1 dev eth0 proto dhcp metric 100
    10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.1
    10.44.0.0/24 dev kube-bridge proto kernel scope link src 10.44.0.1
    10.44.1.0/24 via 10.4.0.3 dev kilo0 proto static onlink
    10.44.2.0/24 via 10.4.0.2 dev kilo0 proto static onlink
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
    172.25.132.0/23 dev eth0 proto kernel scope link src 172.25.132.35 metric 100
    

    sudo wg

    interface: kilo0
      public key: c7yyiSaA9nvLVFz60Rkr42+xdvC4BVPaGDKJ+5v5QTU=
      private key: (hidden)
      listening port: 51820
    
    peer: yA6LdCuJT7y+pRNvlhds8GeeEGoT1Q/PUhF++GZ8gB0=
      endpoint: 172.25.132.55:51820
      allowed ips: 10.44.2.0/24, 172.25.132.55/32, 10.4.0.2/32
      latest handshake: 1 hour, 37 minutes, 22 seconds ago
      transfer: 2.38 KiB received, 1.66 KiB sent
    
    peer: IErj++lf80jkWOEEVsH97G6tTbNGViCZ12s2Gedl5kg=
      endpoint: 172.25.132.230:51820
      allowed ips: 10.44.1.0/24, 172.25.132.230/32, 10.4.0.3/32
      latest handshake: 1 hour, 37 minutes, 22 seconds ago
      transfer: 880 B received, 592 B sent
    

    Below are the network settings of node-1 (foundation-musanin-node-1).

    sudo iptables-save > ~/temp/20210720_iptables_node1

    
    # Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
    *security
    :INPUT ACCEPT [97587:60160550]
    :FORWARD ACCEPT [36309:18066799]
    :OUTPUT ACCEPT [82224:74167649]
    COMMIT
    # Completed on Tue Jul 20 09:30:41 2021
    # Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
    *raw
    :PREROUTING ACCEPT [929995:1885420176]
    :OUTPUT ACCEPT [82339:74210968]
    COMMIT
    # Completed on Tue Jul 20 09:30:41 2021
    # Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
    *mangle
    :PREROUTING ACCEPT [3205552:6261897059]
    :INPUT ACCEPT [511213:559075107]
    :FORWARD ACCEPT [111629:55798071]
    :OUTPUT ACCEPT [334930:130814428]
    :POSTROUTING ACCEPT [446559:186612499]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    COMMIT
    # Completed on Tue Jul 20 09:30:41 2021
    # Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
    *filter
    :INPUT ACCEPT [8232:5406612]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [6668:2199433]
    :DOCKER - [0:0]
    :DOCKER-ISOLATION-STAGE-1 - [0:0]
    :DOCKER-ISOLATION-STAGE-2 - [0:0]
    :DOCKER-USER - [0:0]
    :KILO-IPIP - [0:0]
    :KUBE-EXTERNAL-SERVICES - [0:0]
    :KUBE-FIREWALL - [0:0]
    :KUBE-FORWARD - [0:0]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    :KUBE-SERVICES - [0:0]
    -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
    -A INPUT -j KUBE-FIREWALL
    -A INPUT -p ipv4 -m comment --comment "Kilo: jump to IPIP chain" -j KILO-IPIP
    -A INPUT -p ipv4 -m comment --comment "Kilo: reject other IPIP traffic" -j DROP
    -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
    -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A FORWARD -j DOCKER-USER
    -A FORWARD -j DOCKER-ISOLATION-STAGE-1
    -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -o docker0 -j DOCKER
    -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
    -A FORWARD -i docker0 -o docker0 -j ACCEPT
    -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A OUTPUT -j KUBE-FIREWALL
    -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
    -A DOCKER-ISOLATION-STAGE-1 -j RETURN
    -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
    -A DOCKER-ISOLATION-STAGE-2 -j RETURN
    -A DOCKER-USER -j RETURN
    -A KILO-IPIP -s 172.25.132.55/32 -m comment --comment "Kilo: allow IPIP traffic" -j ACCEPT
    -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
    -A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
    -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
    -A KUBE-FORWARD -s 10.44.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A KUBE-FORWARD -d 10.44.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A KUBE-SERVICES -d 10.45.193.115/32 -p tcp -m comment --comment "ingress-nginx/default-http-backend: has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
    COMMIT
    # Completed on Tue Jul 20 09:30:41 2021
    # Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
    *nat
    :PREROUTING ACCEPT [54265:145774843]
    :INPUT ACCEPT [193:32980]
    :OUTPUT ACCEPT [277:20460]
    :POSTROUTING ACCEPT [277:20460]
    :DOCKER - [0:0]
    :KILO-NAT - [0:0]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-MARK-DROP - [0:0]
    :KUBE-MARK-MASQ - [0:0]
    :KUBE-NODEPORTS - [0:0]
    :KUBE-POSTROUTING - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    :KUBE-SEP-2ZKD5TCM6OYP5Z4F - [0:0]
    :KUBE-SEP-AWXKJFKBRSHYCFIX - [0:0]
    :KUBE-SEP-CN66RQ6OOEA3MWHE - [0:0]
    :KUBE-SEP-DWYO5FEH5J22EHP6 - [0:0]
    :KUBE-SEP-OC25G42R336ONLNP - [0:0]
    :KUBE-SEP-P2V4QTKODS75TTCV - [0:0]
    :KUBE-SEP-QPYANCAVUCDJHCNQ - [0:0]
    :KUBE-SEP-TB7SM5ETL6QLHI7C - [0:0]
    :KUBE-SEP-YYRNE5UY2T7VHRUE - [0:0]
    :KUBE-SERVICES - [0:0]
    :KUBE-SVC-7ECNF7D6GS4GHP3A - [0:0]
    :KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
    :KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
    :KUBE-SVC-LC5QY66VUV2HJ6WZ - [0:0]
    :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
    :KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
    -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
    -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
    -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
    -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
    -A POSTROUTING -s 10.44.2.0/24 -m comment --comment "Kilo: jump to KILO-NAT chain" -j KILO-NAT
    -A DOCKER -i docker0 -j RETURN
    -A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
    -A KILO-NAT -d 10.44.0.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 172.25.132.35/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
    -A KILO-NAT -d 10.44.2.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 172.25.132.55/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.3/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
    -A KILO-NAT -d 10.44.1.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 172.25.132.230/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -d 10.4.0.3/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
    -A KILO-NAT -m comment --comment "Kilo: NAT remaining packets" -j MASQUERADE
    -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
    -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
    -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
    -A KUBE-SEP-2ZKD5TCM6OYP5Z4F -s 10.44.2.23/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-2ZKD5TCM6OYP5Z4F -p tcp -m tcp -j DNAT --to-destination 10.44.2.23:9153
    -A KUBE-SEP-AWXKJFKBRSHYCFIX -s 10.44.2.21/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-AWXKJFKBRSHYCFIX -p tcp -m tcp -j DNAT --to-destination 10.44.2.21:8080
    -A KUBE-SEP-CN66RQ6OOEA3MWHE -s 10.44.0.13/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-CN66RQ6OOEA3MWHE -p tcp -m tcp -j DNAT --to-destination 10.44.0.13:443
    -A KUBE-SEP-DWYO5FEH5J22EHP6 -s 172.25.132.35/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-DWYO5FEH5J22EHP6 -p tcp -m tcp -j DNAT --to-destination 172.25.132.35:6443
    -A KUBE-SEP-OC25G42R336ONLNP -s 10.44.1.18/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-OC25G42R336ONLNP -p tcp -m tcp -j DNAT --to-destination 10.44.1.18:53
    -A KUBE-SEP-P2V4QTKODS75TTCV -s 10.44.2.23/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-P2V4QTKODS75TTCV -p tcp -m tcp -j DNAT --to-destination 10.44.2.23:53
    -A KUBE-SEP-QPYANCAVUCDJHCNQ -s 10.44.2.23/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-QPYANCAVUCDJHCNQ -p udp -m udp -j DNAT --to-destination 10.44.2.23:53
    -A KUBE-SEP-TB7SM5ETL6QLHI7C -s 10.44.1.18/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-TB7SM5ETL6QLHI7C -p tcp -m tcp -j DNAT --to-destination 10.44.1.18:9153
    -A KUBE-SEP-YYRNE5UY2T7VHRUE -s 10.44.1.18/32 -j KUBE-MARK-MASQ
    -A KUBE-SEP-YYRNE5UY2T7VHRUE -p udp -m udp -j DNAT --to-destination 10.44.1.18:53
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.53.231/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.53.231/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-SVC-LC5QY66VUV2HJ6WZ
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.214.208/32 -p tcp -m comment --comment "pf/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.214.208/32 -p tcp -m comment --comment "pf/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-SVC-7ECNF7D6GS4GHP3A
    -A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.45.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
    -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
    -A KUBE-SVC-7ECNF7D6GS4GHP3A -j KUBE-SEP-AWXKJFKBRSHYCFIX
    -A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OC25G42R336ONLNP
    -A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-P2V4QTKODS75TTCV
    -A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TB7SM5ETL6QLHI7C
    -A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-2ZKD5TCM6OYP5Z4F
    -A KUBE-SVC-LC5QY66VUV2HJ6WZ -j KUBE-SEP-CN66RQ6OOEA3MWHE
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-DWYO5FEH5J22EHP6
    -A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YYRNE5UY2T7VHRUE
    -A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-QPYANCAVUCDJHCNQ
    COMMIT
    # Completed on Tue Jul 20 09:30:41 2021
    

    sudo ifconfig

    docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
            ether 02:42:d3:e3:11:ff  txqueuelen 0  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.25.132.55  netmask 255.255.254.0  broadcast 172.25.133.255
            inet6 fe80::ef5:e143:acdd:a51a  prefixlen 64  scopeid 0x20<link>
            inet6 fe80::2483:a3e7:45f:fe20  prefixlen 64  scopeid 0x20<link>
            inet6 fe80::d43f:5c55:1e06:d744  prefixlen 64  scopeid 0x20<link>
            ether 00:15:5d:6a:82:4f  txqueuelen 1000  (Ethernet)
            RX packets 7373002  bytes 7733829348 (7.2 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 267181  bytes 71056769 (67.7 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    kilo0: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
            inet 10.4.0.2  netmask 255.255.0.0  destination 10.4.0.2
            unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
            RX packets 131  bytes 17188 (16.7 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 126  bytes 19100 (18.6 KiB)
            TX errors 1  dropped 0 overruns 0  carrier 0  collisions 0
    
    kube-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet 10.44.2.1  netmask 255.255.255.0  broadcast 10.44.2.255
            inet6 fe80::1487:8cff:fe05:c2a8  prefixlen 64  scopeid 0x20<link>
            ether 16:87:8c:05:c2:a8  txqueuelen 1000  (Ethernet)
            RX packets 93131  bytes 6175021 (5.8 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 96966  bytes 66719614 (63.6 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 157924  bytes 81506918 (77.7 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 157924  bytes 81506918 (77.7 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
            inet 10.44.2.1  netmask 255.255.255.255
            tunnel   txqueuelen 1000  (IPIP Tunnel)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth2c02f12a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet6 fe80::88b3:b4ff:fe84:309e  prefixlen 64  scopeid 0x20<link>
            ether 8a:b3:b4:84:30:9e  txqueuelen 0  (Ethernet)
            RX packets 71509  bytes 5683947 (5.4 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 72642  bytes 25937437 (24.7 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth6dd5fd0c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet6 fe80::48dd:7cff:fea4:b183  prefixlen 64  scopeid 0x20<link>
            ether 4a:dd:7c:a4:b1:83  txqueuelen 0  (Ethernet)
            RX packets 51  bytes 4469 (4.3 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 37  bytes 5999 (5.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    veth9acbfe8d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet6 fe80::f4b5:afff:fe7c:2599  prefixlen 64  scopeid 0x20<link>
            ether f6:b5:af:7c:25:99  txqueuelen 0  (Ethernet)
            RX packets 86  bytes 13277 (12.9 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 144  bytes 10648 (10.3 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    vethc96781bd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
            inet6 fe80::bca9:9dff:fea5:7ebc  prefixlen 64  scopeid 0x20<link>
            ether be:a9:9d:a5:7e:bc  txqueuelen 0  (Ethernet)
            RX packets 21413  bytes 1776386 (1.6 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 24271  bytes 40768549 (38.8 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    

    sudo ip a

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:15:5d:6a:82:4f brd ff:ff:ff:ff:ff:ff
        inet 172.25.132.55/23 brd 172.25.133.255 scope global noprefixroute dynamic eth0
           valid_lft 80954sec preferred_lft 80954sec
        inet6 fe80::d43f:5c55:1e06:d744/64 scope link tentative noprefixroute dadfailed
           valid_lft forever preferred_lft forever
        inet6 fe80::ef5:e143:acdd:a51a/64 scope link tentative noprefixroute dadfailed
           valid_lft forever preferred_lft forever
        inet6 fe80::2483:a3e7:45f:fe20/64 scope link tentative noprefixroute dadfailed
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:d3:e3:11:ff brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    4: kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue state UP group default qlen 1000
        link/ether 16:87:8c:05:c2:a8 brd ff:ff:ff:ff:ff:ff
        inet 10.44.2.1/24 brd 10.44.2.255 scope global kube-bridge
           valid_lft forever preferred_lft forever
        inet6 fe80::1487:8cff:fe05:c2a8/64 scope link
           valid_lft forever preferred_lft forever
    5: veth9acbfe8d@if...: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
        link/ether f6:b5:af:7c:25:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::f4b5:afff:fe7c:2599/64 scope link
           valid_lft forever preferred_lft forever
    6: vethc96781bd@if...: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
        link/ether be:a9:9d:a5:7e:bc brd ff:ff:ff:ff:ff:ff link-netnsid 1
        inet6 fe80::bca9:9dff:fea5:7ebc/64 scope link
           valid_lft forever preferred_lft forever
    7: veth2c02f12a@if...: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
        link/ether 8a:b3:b4:84:30:9e brd ff:ff:ff:ff:ff:ff link-netnsid 2
        inet6 fe80::88b3:b4ff:fe84:309e/64 scope link
           valid_lft forever preferred_lft forever
    8: kilo0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
        link/none
        inet 10.4.0.2/16 brd 10.4.255.255 scope global kilo0
           valid_lft forever preferred_lft forever
    9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
        inet 10.44.2.1/32 brd 10.44.2.1 scope global tunl0
           valid_lft forever preferred_lft forever
    17: veth6dd5fd0c@if...: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
        link/ether 4a:dd:7c:a4:b1:83 brd ff:ff:ff:ff:ff:ff link-netnsid 3
        inet6 fe80::48dd:7cff:fea4:b183/64 scope link
           valid_lft forever preferred_lft forever
    

    sudo ip r

    default via 172.25.132.1 dev eth0 proto dhcp metric 100
    10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.2
    10.44.0.0/24 via 10.4.0.1 dev kilo0 proto static onlink
    10.44.1.0/24 via 10.4.0.3 dev kilo0 proto static onlink
    10.44.2.0/24 dev kube-bridge proto kernel scope link src 10.44.2.1
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
    172.25.132.0/23 dev eth0 proto kernel scope link src 172.25.132.55 metric 100
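
    Given these routes, the following sketch shows which path a packet from the pod subnet on node-1 would take toward the master's endpoint (10.44.2.21, the echoserver pod IP, is used here as an example source address):

    ## route lookup as if the packet arrived from a pod via kube-bridge
    sudo ip route get 172.25.132.35 from 10.44.2.21 iif kube-bridge
    ## compare with the route toward the master's WireGuard IP
    sudo ip route get 10.4.0.1 from 10.44.2.21 iif kube-bridge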
    

    sudo wg

    interface: kilo0
      public key: yA6LdCuJT7y+pRNvlhds8GeeEGoT1Q/PUhF++GZ8gB0=
      private key: (hidden)
      listening port: 51820
    
    peer: IErj++lf80jkWOEEVsH97G6tTbNGViCZ12s2Gedl5kg=
      endpoint: 172.25.132.230:51820
      allowed ips: 10.44.1.0/24, 172.25.132.230/32, 10.4.0.3/32
      latest handshake: 1 hour, 44 minutes, 22 seconds ago
      transfer: 2.30 KiB received, 2.35 KiB sent
    
    peer: c7yyiSaA9nvLVFz60Rkr42+xdvC4BVPaGDKJ+5v5QTU=
      endpoint: 172.25.132.35:51820
      allowed ips: 10.44.0.0/24, 172.25.132.35/32, 10.4.0.1/32
      latest handshake: 1 hour, 44 minutes, 29 seconds ago
      transfer: 1.66 KiB received, 2.38 KiB sent
    

    kgctl graph

    digraph kilo {
            label="10.4.0.0/16";
            labelloc=t;
            outputorder=nodesfirst;
            overlap=false;
            "foundation-musanin-master"->"foundation-musanin-node-1"[ dir=both ];
            "foundation-musanin-master"->"foundation-musanin-node-2"[ dir=both ];
            "foundation-musanin-node-1"->"foundation-musanin-node-2"[ dir=both ];
            subgraph "cluster_location_location:foundation-musanin-master" {
            label="location:foundation-musanin-master";
            style="dashed,rounded";
            "foundation-musanin-master" [ label="location:foundation-musanin-master\nfoundation-musanin-master\n10.44.0.0/24\n172.25.132.35\n10.4.0.1\n172.25.132.35:51820", rank=1, shape=ellipse ];
    
    }
    ;
            subgraph "cluster_location_location:foundation-musanin-node-1" {
            label="location:foundation-musanin-node-1";
            style="dashed,rounded";
            "foundation-musanin-node-1" [ label="location:foundation-musanin-node-1\nfoundation-musanin-node-1\n10.44.2.0/24\n172.25.132.55\n10.4.0.2\n172.25.132.55:51820", rank=1, shape=ellipse ];
    
    }
    ;
            subgraph "cluster_location_location:foundation-musanin-node-2" {
            label="location:foundation-musanin-node-2";
            style="dashed,rounded";
            "foundation-musanin-node-2" [ label="location:foundation-musanin-node-2\nfoundation-musanin-node-2\n10.44.1.0/24\n172.25.132.230\n10.4.0.3\n172.25.132.230:51820", rank=1, shape=ellipse ];
    
    }
    ;
            subgraph "cluster_peers" {
            label="peers";
            style="dashed,rounded";
    
    }
    ;
    
    }
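
    The DOT output above can be rendered to an image with Graphviz, e.g. (assuming Graphviz is installed locally):

    kgctl graph | circo -Tsvg > cluster.svg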
    

    I am stuck on the iptables routing, and it is unclear to me how to set up access from node-1 (foundation-musanin-node-1) to the kube-apiserver (kubernetes.default -> 10.45.0.1:443 -> 172.25.132.35:6443).

    I hope you can help.

    Thanks in advance, Vladimir.

    opened by vladimir22 5
  • [proposal] print warning if `kg` and `kgctl` have different version when using `kgctl`

    [proposal] print warning if `kg` and `kgctl` have different version when using `kgctl`

    Maybe this will prevent issues like #216. This will probably require an additional annotation on the nodes. If we decide to implement this, maybe we should first move all annotations exclusively used by Kilo into a single JSON-encoded annotation to prevent confusion with the annotations that can be used to configure Kilo. wdyt?

    opened by leonnicolas 0
Releases(0.3.1)
  • 0.3.1(Aug 19, 2021)

    Version 0.3.1 fixes a bug with the scoping of Kilo's Peer CustomResourceDefinition, which incorrectly caused Peers to be namespaced (#226). Note: to upgrade from the affected version of Kilo, 0.3.0, take the following steps (a consolidated sketch of the commands follows the list):

    1. delete the old Kilo Peer CRD: kubectl delete crd peers.kilo.squat.ai; and
    2. apply the Kilo Peer CRD manifest: kubectl apply -f https://raw.githubusercontent.com/squat/kilo/0.3.1/manifests/crds.yaml.
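
    A consolidated sketch of the two steps above, assuming kubectl is pointed at the affected cluster:

    kubectl delete crd peers.kilo.squat.ai
    kubectl apply -f https://raw.githubusercontent.com/squat/kilo/0.3.1/manifests/crds.yaml
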
    Source code(tar.gz)
    Source code(zip)
    kgctl-darwin-amd64(44.38 MB)
    kgctl-darwin-arm64(43.58 MB)
    kgctl-linux-amd64(45.10 MB)
    kgctl-linux-arm(38.99 MB)
    kgctl-linux-arm64(42.54 MB)
    kgctl-windows-amd64(45.43 MB)
  • 0.3.0(Jul 8, 2021)

    Version 0.3.0 includes additions to the docs, some bug fixes, and the following major features:

    • support NAT to NAT communication via UDP hole punching thanks to #146 and @JulienVdG
    • upgrade the Peer CRD to apiextensions v1 instead of the deprecated apiextensions v1beta1 #186
      Note: Kilo now requires users to deploy the Peer CRD manually; to upgrade an existing cluster, take the following steps (a sketch of the image-update step follows this list):
      1. update the Kilo image;
      2. delete the old Kilo Peer CRD: kubectl delete crd peers.kilo.squat.ai; and
      3. apply the Kilo Peer CRD manifest: kubectl apply -f https://raw.githubusercontent.com/squat/kilo/0.3.0/manifests/crds.yaml
    • publish kgctl binaries for Apple's M1 architecture #187
    • introduced end to end tests
    • automatically detect the granularity of the Kilo mesh; so no more need for kgctl --mesh-granularity full #197
    • support configuring nodes as gateways to allowed IPs outside the cluster #164
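
    As noted above, step 1 (updating the Kilo image) depends on how Kilo was deployed; for the stock DaemonSet manifests it might look like the following sketch (the kube-system namespace, the kilo DaemonSet name, and the kilo container name are assumptions; adjust them to match your deployment). Steps 2 and 3 are the same CRD delete/apply shown for 0.3.1 above, with the 0.3.0 manifest URL.

    ## hypothetical names; verify with: kubectl get daemonsets --all-namespaces | grep kilo
    kubectl -n kube-system set image daemonset/kilo kilo=squat/kilo:0.3.0
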
    Source code(tar.gz)
    Source code(zip)
    kgctl-darwin-amd64(44.38 MB)
    kgctl-darwin-arm64(43.58 MB)
    kgctl-linux-amd64(45.10 MB)
    kgctl-linux-arm(38.99 MB)
    kgctl-linux-arm64(42.54 MB)
    kgctl-windows-amd64(45.43 MB)
  • 0.2.0(Apr 15, 2021)

    Version 0.2.0 of Kilo includes several bug fixes and the following major features:

    • enable peers to use DNS names as their endpoints
    • support building and running the kgctl binary on Darwin and Windows
    • allow specifying a custom topology label on nodes
    • enable running Kilo with userspace WireGuard
    • automatically detect nodes with no private IPs and place them into unique logical locations
    • reduce calls to iptables by caching lookups
    • add a --resync-period flag to control the interval between reconciliation runs (a sketch of setting this flag follows the list)
    • manually disable private IP addresses with an annotation
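
    For the --resync-period flag, a hedged sketch of adding it to an existing Kilo DaemonSet (the kube-system namespace and the kilo DaemonSet name are assumptions; adjust them to your deployment):

    ## assumes the first container in the pod template already defines an args list
    kubectl -n kube-system patch daemonset kilo --type=json \
      -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--resync-period=30s"}]'
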
    Source code(tar.gz)
    Source code(zip)
    kgctl-darwin-amd64(32.94 MB)
    kgctl-linux-amd64(33.86 MB)
    kgctl-linux-arm(28.82 MB)
    kgctl-linux-arm64(31.72 MB)
    kgctl-windows-amd64(33.57 MB)
  • 0.1.0(Sep 15, 2020)

    Version 0.1.0 marks the first official release of the Kilo project. To date, Kilo supports the following major features:

    • creating multi-cloud and multi-region Kubernetes clusters;
    • defining custom mesh topologies;
    • allowing independent WireGuard peers to join the mesh, including other Kubernetes clusters;
    • operating Kilo on top of Flannel for greater compatibility;
    • functioning in an interoperable manner with Kubernetes NetworkPolicies; and
    • analyzing the WireGuard mesh with a custom CLI utility, i.e. kgctl.

    For more information and documentation, please take a look at the Kilo documentation at https://kilo.squat.ai.

    Source code(tar.gz)
    Source code(zip)
Owner
Lucas Servén Marín
working on Kubernetes, Prometheus, and Thanos