A scalable overlay networking tool with a focus on performance, simplicity and security

Overview

What is Nebula?

Nebula is a scalable overlay networking tool with a focus on performance, simplicity and security. It lets you seamlessly connect computers anywhere in the world. Nebula is portable, and runs on Linux, macOS, Windows, iOS, and Android. It can be used to connect a small number of computers, but is also able to connect tens of thousands of computers.

Nebula incorporates a number of existing concepts like encryption, security groups, certificates, and tunneling, and each of those individual pieces existed before Nebula in various forms. What makes Nebula different to existing offerings is that it brings all of these ideas together, resulting in a sum that is greater than its individual parts.

You can read more about Nebula here.

You can also join the NebulaOSS Slack group here.

Supported Platforms

Desktop and Server

Check the releases page for downloads

  • Linux - 64 and 32 bit, arm, and others
  • Windows
  • macOS
  • FreeBSD

Mobile

  • iOS
  • Android

Technical Overview

Nebula is a mutually authenticated peer-to-peer software defined network based on the Noise Protocol Framework. Nebula uses certificates to assert a node's IP address, name, and membership within user-defined groups. Nebula's user-defined groups allow for provider agnostic traffic filtering between nodes. Discovery nodes allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs. Users can move data between nodes in any number of cloud service providers, datacenters, and endpoints, without needing to maintain a particular addressing scheme.

Nebula uses elliptic curve Diffie-Hellman key exchange, and AES-256-GCM in its default configuration.
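
The cipher is selectable in the nebula config; as the example configuration excerpted later in this document notes, the alternative is chachapoly, and the value must be identical on every node and lighthouse:

cipher: chachapoly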

Nebula was created to provide a mechanism for groups of hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
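
For example, here is a sketch of a group-based inbound firewall rule in a nebula config file, mirroring the example configuration excerpted later in this document; it admits tcp/443 only from hosts whose certificates carry both the laptop and home groups:

firewall:
  inbound:
    # Allow tcp/443 from any host with BOTH the laptop and home groups
    - port: 443
      proto: tcp
      groups:
        - laptop
        - home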

Getting started (quickly)

To set up a Nebula network, you'll need:

1. The Nebula binaries for your specific platform. Specifically, you'll need nebula-cert and the nebula binary for each platform you use.

2. (Optional, but strongly recommended.) At least one discovery node with a routable IP address, which we call a lighthouse.

Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change. Running a lighthouse requires very few compute resources, and you can easily use the least expensive option from a cloud hosting provider. If you're not sure which provider to use, a number of us have used $5/mo DigitalOcean droplets as lighthouses.

Once you have launched an instance, ensure that Nebula UDP traffic (default port udp/4242) can reach it over the internet.
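
For example, on a Linux lighthouse managed with ufw, a sketch (substitute your provider's firewall tooling and any non-default port):

sudo ufw allow 4242/udp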

3. A Nebula certificate authority, which will be the root of trust for a particular Nebula network.

./nebula-cert ca -name "Myorganization, Inc"

This will create files named ca.key and ca.crt in the current directory. The ca.key file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.

4. Nebula host keys and certificates generated from that certificate authority

This assumes you have four nodes, named lighthouse1, laptop, server1, and host3. You can name the nodes any way you'd like, including using FQDNs. You'll also need to choose IP addresses and the associated subnet. In this example, we are creating a nebula network that will use 192.168.100.x/24 as its network range. This example also demonstrates nebula groups, which can later be used to define traffic rules in a nebula network.

./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"
./nebula-cert sign -name "host3" -ip "192.168.100.10/24"

5. Configuration files for each host

Download a copy of the nebula example configuration.

  • On the lighthouse node, you'll need to ensure am_lighthouse: true is set.

  • On the individual hosts, ensure the lighthouse is defined properly in the static_host_map section, and is added to the lighthouse hosts section (see the sketch below).
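
A minimal sketch of those two sections on a non-lighthouse host, assuming the lighthouse signed above (nebula IP 192.168.100.1) is reachable at the hypothetical public address lighthouse1.example.com:

static_host_map:
  "192.168.100.1": ["lighthouse1.example.com:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"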

6. Copy nebula credentials, configuration, and binaries to each host

For each host, copy the nebula binary to the host, along with config.yaml from step 5, and the files ca.crt, {host}.crt, and {host}.key from step 4.

DO NOT COPY ca.key TO INDIVIDUAL NODES.
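
For example, a sketch using scp for the host named laptop (the destination user, host, and directory are hypothetical):

scp nebula config.yaml ca.crt laptop.crt laptop.key user@laptop:/etc/nebula/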

7. Run nebula on each host

./nebula -config /path/to/config.yaml
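
To keep nebula running across reboots on a Linux host, here is a minimal systemd unit sketch; the binary and config paths are assumptions, so adapt them to your layout and init system:

[Unit]
Description=Nebula overlay networking tool
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yaml
Restart=always

[Install]
WantedBy=multi-user.target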

Building Nebula from source

Download Go and clone this repo, then change to the nebula directory.
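
For example (the URL below is the canonical GitHub location of this repo):

git clone https://github.com/slackhq/nebula.git
cd nebula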

To build nebula for all platforms: make all

To build nebula for a specific platform (ex, Windows): make bin-windows

See the Makefile for more details on build targets.

Credits

Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang.

Issues
  • Node outside of LAN can only talk to lighthouse

    I have a bunch of computers on my LAN with one lighthouse that is accessible from the outside world:

    • Lighthouse: 192.168.42.99 (mydomain.com:4242)
    • LAN machine 1 (A): 192.168.42.200
    • LAN machine 2 (B): 192.168.42.203
    • Outside-LAN machine (C): 192.168.42.10

    Using the 192.168.42.x nebula IPs:

    • A, B and lighthouse can ping each other without any issue
    • C can ping the lighthouse, but neither A nor B
    • A and B can't ping C
    • The lighthouse can ping C

    Lighthouse config:

    # This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
    # Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)
    
    # PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
    pki:
      # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
      ca: /etc/nebula/ca.crt
      cert: /etc/nebula/pihole.crt
      key: /etc/nebula/pihole.key
      #blacklist is a list of certificate fingerprints that we will refuse to talk to
      #blacklist:
      #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
    
    # The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
    # A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
    # The syntax is:
    #   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
    # Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
    static_host_map:
      "192.168.42.99": ["mydomain.com:4242"]
    
    
    lighthouse:
      # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
      # you have configured to be lighthouses in your network
      am_lighthouse: true
      # serve_dns optionally starts a dns listener that responds to various queries and can even be
      # delegated to for resolution
      # serve_dns: true
      # interval is the number of seconds between updates from this node to a lighthouse.
      # during updates, a node sends information about its current IP addresses to each node.
      interval: 60
      # hosts is a list of lighthouse hosts this node should report to and query from
      # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
      hosts:
              #  - "192.168.42.1"
    
    # Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
    # however using port 0 will dynamically assign a port and is recommended for roaming nodes.
    listen:
      host: 0.0.0.0
      port: 4242
      # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
      # default is 64, does not support reload
      #batch: 64
      # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
      # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default)
      # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
      # max, net.core.rmem_max and net.core.wmem_max
      #read_buffer: 10485760
      #write_buffer: 10485760
    
    # Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
    punchy: true
    # punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
    # this is extremely useful if one node is behind a difficult nat, such as symmetric
    punch_back: true
    
    # Cipher allows you to choose between the available ciphers for your network.
    # IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
    #cipher: chachapoly
    
    # Local range is used to define a hint about the local network range, which speeds up discovering the fastest
    # path to a network adjacent nebula node.
    #local_range: "172.16.0.0/24"
    
    # sshd can expose informational and administrative functions via ssh this is a
    #sshd:
      # Toggles the feature
      #enabled: true
      # Host and port to listen on, port 22 is not allowed for your safety
      #listen: 127.0.0.1:2222
      # A file containing the ssh host private key to use
      # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
      #host_key: ./ssh_host_ed25519_key
      # A file containing a list of authorized public keys
      #authorized_users:
        #- user: steeeeve
          # keys can be an array of strings or single string
          #keys:
            #- "ssh public key string"
    
    # Configure the private interface. Note: addr is baked into the nebula certificate
    tun:
      # Name of the device
      dev: nebula1
      # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
      drop_local_broadcast: false
      # Toggles forwarding of multicast packets
      drop_multicast: false
      # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
      tx_queue: 500
      # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
      mtu: 1300
      # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
      routes:
        #- mtu: 8800
        #  route: 10.0.0.0/16
    
    # TODO
    # Configure logging level
    logging:
      # panic, fatal, error, warning, info, or debug. Default is info
      level: info
      # json or text formats currently available. Default is text
      format: text
    
    #stats:
      #type: graphite
      #prefix: nebula
      #protocol: tcp
      #host: 127.0.0.1:9999
      #interval: 10s
    
      #type: prometheus
      #listen: 127.0.0.1:8080
      #path: /metrics
      #namespace: prometheusns
      #subsystem: nebula
      #interval: 10s
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is default deny. There is no way to write a deny rule.
      # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
      # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
      # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
      #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
      #   proto: `any`, `tcp`, `udp`, or `icmp`
      #   host: `any` or a literal hostname, ie `test-host`
      #   group: `any` or a literal group name, ie `default-group`
      #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
      #   cidr: a CIDR, `0.0.0.0/0` is any.
      #   ca_name: An issuing CA name
      #   ca_sha: An issuing CA shasum
    
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow icmp between any nebula hosts
        - port: any
          proto: any
          host: any
    
    

    C config:

    # This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
    # Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)
    
    # PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
    pki:
      # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
      ca: /etc/nebula/ca.crt
      cert: /etc/nebula/work.crt
      key: /etc/nebula/work.key
      #blacklist is a list of certificate fingerprints that we will refuse to talk to
      #blacklist:
      #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
    
    # The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
    # A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
    # The syntax is:
    #   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
    # Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
    static_host_map:
      "192.168.42.99": ["ftpix.com:4242"]
    
    lighthouse:
      # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
      # you have configured to be lighthouses in your network
      am_lighthouse: false
      # serve_dns optionally starts a dns listener that responds to various queries and can even be
      # delegated to for resolution
      #serve_dns: false
      # interval is the number of seconds between updates from this node to a lighthouse.
      # during updates, a node sends information about its current IP addresses to each node.
      interval: 60
      # hosts is a list of lighthouse hosts this node should report to and query from
      # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
      hosts:
        - "192.168.42.99"
    
    # Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
    # however using port 0 will dynamically assign a port and is recommended for roaming nodes.
    listen:
      host: 0.0.0.0
      port: 0
      # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
      # default is 64, does not support reload
      #batch: 64
      # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
      # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default)
      # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
      # max, net.core.rmem_max and net.core.wmem_max
      #read_buffer: 10485760
      #write_buffer: 10485760
    
    # Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
    punchy: true
    # punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
    # this is extremely useful if one node is behind a difficult nat, such as symmetric
    punch_back: true
    
    # Cipher allows you to choose between the available ciphers for your network.
    # IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
    #cipher: chachapoly
    
    # Local range is used to define a hint about the local network range, which speeds up discovering the fastest
    # path to a network adjacent nebula node.
    #local_range: "172.16.0.0/24"
    
    # sshd can expose informational and administrative functions via ssh this is a
    #sshd:
      # Toggles the feature
      #enabled: true
      # Host and port to listen on, port 22 is not allowed for your safety
      #listen: 127.0.0.1:2222
      # A file containing the ssh host private key to use
      # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
      #host_key: ./ssh_host_ed25519_key
      # A file containing a list of authorized public keys
      #authorized_users:
        #- user: steeeeve
          # keys can be an array of strings or single string
          #keys:
            #- "ssh public key string"
    
    # Configure the private interface. Note: addr is baked into the nebula certificate
    tun:
      # Name of the device
      dev: nebula1
      # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
      drop_local_broadcast: false
      # Toggles forwarding of multicast packets
      drop_multicast: false
      # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
      tx_queue: 500
      # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
      mtu: 1300
      # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
      routes:
        #- mtu: 8800
        #  route: 10.0.0.0/16
    
    # TODO
    # Configure logging level
    logging:
      # panic, fatal, error, warning, info, or debug. Default is info
      level: info
      # json or text formats currently available. Default is text
      format: text
    
    #stats:
      #type: graphite
      #prefix: nebula
      #protocol: tcp
      #host: 127.0.0.1:9999
      #interval: 10s
    
      #type: prometheus
      #listen: 127.0.0.1:8080
      #path: /metrics
      #namespace: prometheusns
      #subsystem: nebula
      #interval: 10s
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is default deny. There is no way to write a deny rule.
      # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
      # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
      # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
      #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
      #   proto: `any`, `tcp`, `udp`, or `icmp`
      #   host: `any` or a literal hostname, ie `test-host`
      #   group: `any` or a literal group name, ie `default-group`
      #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
      #   cidr: a CIDR, `0.0.0.0/0` is any.
      #   ca_name: An issuing CA name
      #   ca_sha: An issuing CA shasum
    
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow icmp between any nebula hosts
        - port: any
          proto: icmp
          host: any
    
        # Allow tcp/443 from any host with BOTH laptop and home group
        - port: any
          proto: tcp
          host: any
    
        - port: any
          proto: udp
          host: any
    
    

    Logs from C:

    Dec 05 15:55:20 gz-t480 nebula[32698]: time="2019-12-05T15:55:20+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.1.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:22 gz-t480 nebula[32698]: time="2019-12-05T15:55:22+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.1.198:52803" vpnIp=192.168.42.198
    Dec 05 15:55:23 gz-t480 nebula[32698]: time="2019-12-05T15:55:23+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.200.198:52803" vpnIp=192.168.42.198
    Dec 05 15:55:25 gz-t480 nebula[32698]: time="2019-12-05T15:55:25+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.21.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:27 gz-t480 nebula[32698]: time="2019-12-05T15:55:27+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.19.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:29 gz-t480 nebula[32698]: time="2019-12-05T15:55:29+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.17.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:31 gz-t480 nebula[32698]: time="2019-12-05T15:55:31+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.20.0.1:52803" vpnIp=192.168.42.198
    Dec 05 15:55:33 gz-t480 nebula[32698]: time="2019-12-05T15:55:33+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.21.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:35 gz-t480 nebula[32698]: time="2019-12-05T15:55:35+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.19.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:38 gz-t480 nebula[32698]: time="2019-12-05T15:55:38+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.17.0.1:58904" vpnIp=192.168.42.198
    Dec 05 15:55:40 gz-t480 nebula[32698]: time="2019-12-05T15:55:40+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.20.0.1:58904" vpnIp=192.168.42.198
    
    opened by lamarios 30
  • MIPS64

    Are there any plans on supporting mips64?

    enhancement 
    opened by swampmonster 18
  • Unbreak building for FreeBSD

    I naively copied the darwin files to unbreak building FreeBSD binaries. The other issue is that the upstream version of the water library doesn't support FreeBSD. There is a fork with FreeBSD support added (https://github.com/yggdrasil-network/water) and a work-in-progress pull request to upstream: https://github.com/songgao/water/pull/37

    After these dirty hacks I'm able to start nebula on FreeBSD hosts but no traffic is passed between them:

    $ sudo ./nebula -config config.yml
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip:<nil> proto:0 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip:<nil> proto:1 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:443 groups:[laptop home] host: ip:<nil> proto:6 startPort:443]"
    INFO[0000] Firewall started                              firewallHash=853d3005de969aa0cb1100731e983a740ab4218f89c78189edd389ff5e05ae99
    INFO[0000] Main HostMap created                          network=192.168.100.2/24 preferredRanges="[192.168.0.0/24]"
    INFO[0000] UDP hole punching enabled
    command: ifconfig tap0 192.168.100.2/24 192.168.100.2
    command: ifconfig tap0 mtu 1300
    INFO[0000] Nebula interface is active                    build=dev+20191217111808 interface=tap0 network=192.168.100.2/24
    INFO[0000] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3879127975 remoteIndex=0 udpAddr="188.116.33.203:4242" vpnIp=192.168.100.1
    INFO[0000] Handshake message received                    durationNs=446865780 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=3879127975 remoteIndex=3879127975 responderIndex=834573217 udpAddr="188.116.33.203:4242" vpnIp=192.168.100.1
    

    tap0 interface is configured correctly:

    tap0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1300
            options=80000<LINKSTATE>
            ether 58:9c:fc:10:ff:96
            inet 192.168.100.2 netmask 0xffffff00 broadcast 192.168.100.2
            groups: tap
            media: Ethernet autoselect
            status: active
            nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
            Opened by PID 42831
    
    [email protected] ~/nebula/build/freebsd (support-freebsd*) $ netstat -rn4
    Routing tables
    
    Internet:
    Destination        Gateway            Flags     Netif Expire
    default            192.168.0.2        UGS        igb0
    127.0.0.1          link#5             UH          lo0
    192.168.0.0/24     link#1             U          igb0
    192.168.0.11       link#1             UHS         lo0
    192.168.100.0/24   link#6             U          tap0
    192.168.100.2      link#6             UHS         lo0
    

    There's no response for who-has requests:

    [email protected] ~/nebula/build/freebsd (support-freebsd*) $ sudo tcpdump -i tap0
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on tap0, link-type EN10MB (Ethernet), capture size 262144 bytes
    12:55:38.490465 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    12:55:39.532137 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    12:55:40.559399 ARP, Request who-has 192.168.100.1 tell 192.168.100.2, length 28
    

    Dropping this here in the hope that someone is willing to pick up and continue this effort. I was testing on a few-weeks-old CURRENT:

    FreeBSD monster-1 13.0-CURRENT FreeBSD 13.0-CURRENT #5 1b501770dd3-c264495(master): Wed Nov 27 01:35:34 CET 2019 [email protected]:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64

    opened by mateuszkwiatkowski 16
  • Low transmission efficiency on moderate/low host

    Using version: dev+20191224233758

    CPU Model:             Intel Core i7 9xx (Nehalem Class Core i7)
    CPU Cache Size:    4096 KB
    CPU Number:          1 vCPU
    Memory Usage:          208.73 MB / 985.53 MB
    

    Machine info generated by LemonBench. Command: curl -fsL https://ilemonra.in/LemonBenchIntl | bash -s fast

    Using iperf3 for the test. Server: iperf3 -s; client: iperf3 -c [ip] -P 10

    Two hosts are located in the same datacenter, and both have up to 1 Gbps bandwidth.

    TCP Raw (direct transmission):

    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   112 MBytes  94.1 Mbits/sec  127             sender
    [  4]   0.00-10.00  sec   111 MBytes  93.4 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  94.8 MBytes  79.5 Mbits/sec  111             sender
    [  6]   0.00-10.00  sec  94.1 MBytes  78.9 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  87.6 MBytes  73.5 Mbits/sec  128             sender
    [  8]   0.00-10.00  sec  86.9 MBytes  72.9 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec  79.2 MBytes  66.4 Mbits/sec  115             sender
    [ 10]   0.00-10.00  sec  78.5 MBytes  65.9 Mbits/sec                  receiver
    [ 12]   0.00-10.00  sec  81.7 MBytes  68.5 Mbits/sec  108             sender
    [ 12]   0.00-10.00  sec  80.8 MBytes  67.7 Mbits/sec                  receiver
    [ 14]   0.00-10.00  sec   130 MBytes   109 Mbits/sec  114             sender
    [ 14]   0.00-10.00  sec   129 MBytes   108 Mbits/sec                  receiver
    [ 16]   0.00-10.00  sec   100 MBytes  84.0 Mbits/sec  117             sender
    [ 16]   0.00-10.00  sec  99.4 MBytes  83.4 Mbits/sec                  receiver
    [ 18]   0.00-10.00  sec  98.1 MBytes  82.3 Mbits/sec   79             sender
    [ 18]   0.00-10.00  sec  97.5 MBytes  81.8 Mbits/sec                  receiver
    [ 20]   0.00-10.00  sec   105 MBytes  88.1 Mbits/sec  137             sender
    [ 20]   0.00-10.00  sec   104 MBytes  87.5 Mbits/sec                  receiver
    [ 22]   0.00-10.00  sec  99.6 MBytes  83.5 Mbits/sec  144             sender
    [ 22]   0.00-10.00  sec  98.6 MBytes  82.7 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec   989 MBytes   829 Mbits/sec  1180             sender
    [SUM]   0.00-10.00  sec   981 MBytes   823 Mbits/sec                  receiver
    

    UDP Raw: Command: iperf3 -c [IP] -u -b 80M -P 10

    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  4]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.881 ms  3/39 (7.7%)  
    [  4] Sent 39 datagrams
    [  6]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.886 ms  3/39 (7.7%)  
    [  6] Sent 39 datagrams
    [  8]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.885 ms  3/39 (7.7%)  
    [  8] Sent 39 datagrams
    [ 10]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.875 ms  3/38 (7.9%)  
    [ 10] Sent 38 datagrams
    [ 12]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.841 ms  3/40 (7.5%)  
    [ 12] Sent 40 datagrams
    [ 14]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.772 ms  7/46 (15%)  
    [ 14] Sent 46 datagrams
    [ 16]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.829 ms  7/44 (16%)  
    [ 16] Sent 44 datagrams
    [ 18]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.828 ms  2/37 (5.4%)  
    [ 18] Sent 37 datagrams
    [ 20]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.829 ms  2/37 (5.4%)  
    [ 20] Sent 37 datagrams
    [ 22]   0.00-10.00  sec  94.7 MBytes  79.4 Mbits/sec  0.795 ms  6/43 (14%)  
    [ 22] Sent 43 datagrams
    [SUM]   0.00-10.00  sec   947 MBytes   794 Mbits/sec  0.842 ms  39/402 (9.7%)  
    

    Command: iperf3 -c [IP] -u -b 100M -P 10

    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  4]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.217 ms  0/43 (0%)  
    [  4] Sent 43 datagrams
    [  6]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.290 ms  54/100 (54%)  
    [  6] Sent 100 datagrams
    [  8]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.400 ms  49/93 (53%)  
    [  8] Sent 93 datagrams
    [ 10]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.326 ms  9/53 (17%)  
    [ 10] Sent 53 datagrams
    [ 12]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.244 ms  0/43 (0%)  
    [ 12] Sent 43 datagrams
    [ 14]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.462 ms  52/97 (54%)  
    [ 14] Sent 97 datagrams
    [ 16]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.358 ms  22/68 (32%)  
    [ 16] Sent 68 datagrams
    [ 18]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.667 ms  123/167 (74%)  
    [ 18] Sent 167 datagrams
    [ 20]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.509 ms  51/96 (53%)  
    [ 20] Sent 96 datagrams
    [ 22]   0.00-10.00  sec   118 MBytes  99.0 Mbits/sec  1.238 ms  0/42 (0%)  
    [ 22] Sent 42 datagrams
    [SUM]   0.00-10.00  sec  1.15 GBytes   990 Mbits/sec  1.371 ms  360/802 (45%)  
    

    Wireguard:

    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  40.7 MBytes  34.2 Mbits/sec  166             sender
    [  4]   0.00-10.00  sec  40.3 MBytes  33.8 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  86.7 MBytes  72.7 Mbits/sec  536             sender
    [  6]   0.00-10.00  sec  85.3 MBytes  71.5 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  43.7 MBytes  36.7 Mbits/sec  183             sender
    [  8]   0.00-10.00  sec  43.3 MBytes  36.3 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec  38.2 MBytes  32.0 Mbits/sec  129             sender
    [ 10]   0.00-10.00  sec  37.7 MBytes  31.6 Mbits/sec                  receiver
    [ 12]   0.00-10.00  sec  37.7 MBytes  31.6 Mbits/sec  127             sender
    [ 12]   0.00-10.00  sec  37.3 MBytes  31.3 Mbits/sec                  receiver
    [ 14]   0.00-10.00  sec  38.5 MBytes  32.3 Mbits/sec  125             sender
    [ 14]   0.00-10.00  sec  38.1 MBytes  31.9 Mbits/sec                  receiver
    [ 16]   0.00-10.00  sec  34.2 MBytes  28.7 Mbits/sec  133             sender
    [ 16]   0.00-10.00  sec  33.9 MBytes  28.4 Mbits/sec                  receiver
    [ 18]   0.00-10.00  sec  36.3 MBytes  30.5 Mbits/sec  178             sender
    [ 18]   0.00-10.00  sec  35.9 MBytes  30.1 Mbits/sec                  receiver
    [ 20]   0.00-10.00  sec  33.7 MBytes  28.2 Mbits/sec  104             sender
    [ 20]   0.00-10.00  sec  33.3 MBytes  28.0 Mbits/sec                  receiver
    [ 22]   0.00-10.00  sec  27.8 MBytes  23.4 Mbits/sec   87             sender
    [ 22]   0.00-10.00  sec  27.5 MBytes  23.1 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec   418 MBytes   350 Mbits/sec  1768             sender
    [SUM]   0.00-10.00  sec   413 MBytes   346 Mbits/sec                  receiver
    

    Nebula (using default):

    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  2.94 MBytes  2.47 Mbits/sec   89             sender
    [  4]   0.00-10.00  sec  2.81 MBytes  2.36 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  2.40 MBytes  2.02 Mbits/sec   94             sender
    [  6]   0.00-10.00  sec  2.28 MBytes  1.91 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  2.64 MBytes  2.22 Mbits/sec   71             sender
    [  8]   0.00-10.00  sec  2.51 MBytes  2.10 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec  2.99 MBytes  2.51 Mbits/sec   85             sender
    [ 10]   0.00-10.00  sec  2.88 MBytes  2.41 Mbits/sec                  receiver
    [ 12]   0.00-10.00  sec  2.30 MBytes  1.93 Mbits/sec   65             sender
    [ 12]   0.00-10.00  sec  2.23 MBytes  1.87 Mbits/sec                  receiver
    [ 14]   0.00-10.00  sec  2.60 MBytes  2.18 Mbits/sec   74             sender
    [ 14]   0.00-10.00  sec  2.48 MBytes  2.08 Mbits/sec                  receiver
    [ 16]   0.00-10.00  sec  2.25 MBytes  1.89 Mbits/sec   84             sender
    [ 16]   0.00-10.00  sec  2.13 MBytes  1.78 Mbits/sec                  receiver
    [ 18]   0.00-10.00  sec  3.00 MBytes  2.51 Mbits/sec   69             sender
    [ 18]   0.00-10.00  sec  2.86 MBytes  2.40 Mbits/sec                  receiver
    [ 20]   0.00-10.00  sec  3.37 MBytes  2.83 Mbits/sec   60             sender
    [ 20]   0.00-10.00  sec  3.26 MBytes  2.73 Mbits/sec                  receiver
    [ 22]   0.00-10.00  sec  3.11 MBytes  2.61 Mbits/sec   73             sender
    [ 22]   0.00-10.00  sec  2.97 MBytes  2.49 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec  27.6 MBytes  23.2 Mbits/sec  764             sender
    [SUM]   0.00-10.00  sec  26.4 MBytes  22.1 Mbits/sec                  receiver
    
    

    Nebula (using chachapoly):

    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  2.47 MBytes  2.07 Mbits/sec   31             sender
    [  4]   0.00-10.00  sec  2.35 MBytes  1.97 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  4.07 MBytes  3.42 Mbits/sec   32             sender
    [  6]   0.00-10.00  sec  3.81 MBytes  3.19 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  4.09 MBytes  3.43 Mbits/sec   32             sender
    [  8]   0.00-10.00  sec  3.93 MBytes  3.30 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec  4.33 MBytes  3.63 Mbits/sec   33             sender
    [ 10]   0.00-10.00  sec  4.12 MBytes  3.46 Mbits/sec                  receiver
    [ 12]   0.00-10.00  sec  5.71 MBytes  4.79 Mbits/sec   30             sender
    [ 12]   0.00-10.00  sec  5.54 MBytes  4.64 Mbits/sec                  receiver
    [ 14]   0.00-10.00  sec  4.04 MBytes  3.39 Mbits/sec   74             sender
    [ 14]   0.00-10.00  sec  3.88 MBytes  3.25 Mbits/sec                  receiver
    [ 16]   0.00-10.00  sec  2.87 MBytes  2.41 Mbits/sec   42             sender
    [ 16]   0.00-10.00  sec  2.68 MBytes  2.25 Mbits/sec                  receiver
    [ 18]   0.00-10.00  sec  3.27 MBytes  2.74 Mbits/sec   22             sender
    [ 18]   0.00-10.00  sec  3.09 MBytes  2.59 Mbits/sec                  receiver
    [ 20]   0.00-10.00  sec  4.42 MBytes  3.71 Mbits/sec   66             sender
    [ 20]   0.00-10.00  sec  4.26 MBytes  3.57 Mbits/sec                  receiver
    [ 22]   0.00-10.00  sec  1.98 MBytes  1.66 Mbits/sec   34             sender
    [ 22]   0.00-10.00  sec  1.88 MBytes  1.58 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec  37.3 MBytes  31.2 Mbits/sec  396             sender
    [SUM]   0.00-10.00  sec  35.5 MBytes  29.8 Mbits/sec                  receiver
    
    opened by KazamaSion 15
  • FATA[0001] no such device

    I am running nebula on a Raspberry Pi 1 Model B+ with sudo ./nebula -config config.yaml, but it says FATA[0001] no such device. Does this mean it failed to create the tun device?

    opened by msmarks 12
  • openwrtx64 can't run nebula

    I downloaded the nebula release file (linux-amd64.tar.gz) to run, and got "not found" errors:

    [email protected]:~# ./etc/nebula/nebula -config /etc/nebula/config.yml
    -ash: ./etc/nebula/nebula: not found
    [email protected]:~# bash ./etc/nebula/nebula -config /etc/nebula/config.yml
    bash: ./etc/nebula/nebula: No such file or directory
    [email protected]:~# /etc/nebula/nebula -config /etc/nebula/config.yml
    -ash: /etc/nebula/nebula: not found
    [email protected]:~# bash /etc/nebula/nebula -config /etc/nebula/config.yml
    /etc/nebula/nebula: /etc/nebula/nebula: cannot execute binary file

    opened by zgcbug 11
  • Question: LXD Bridge Setup

    I have an Ubuntu homelab server with LXD enabled. My setup includes:

    • 1 DO lighthouse (nebula IP 192.168.16.1)
    • 1 Macbook (nebula IP 192.168.16.10)
    • 1 physical Ubuntu 18.04 server (nebula IP 192.168.16.2)
    • 1 LXD Ubuntu 18.04 server (LXD container) inside the above physical server (nebula IP 192.168.16.3)

    My physical Ubuntu server has 3 interfaces: enp2s0 (physical, bridge mode), lanbr0 (bridge to enp2s0), and lxdbr0 (bridge for the LXD internal network). The physical NIC enp2s0 has been set to bridge mode with the following netplan config:

    network:
        version: 2
        renderer: networkd
        ethernets:
            enp2s0:
                dhcp4: no
        bridges:
            lanbr0:
                interfaces: [enp2s0]
                macaddress: (same MAC address as the physical enp2s0)
                dhcp4: true
                parameters:
                    stp: false
                    forward-delay: 0
    

    The LXD Ubuntu 18.04 server (LXD container) has 2 interfaces: eth0 (bridged to parent lxdbr0), and eth1 (bridged to parent lanbr0). Since this LXD container is bridged to lanbr0, it is visible to my physical homelab LAN.

    When my MacBook is in the homelab LAN, I can use nebula to access to both physical server and LXD container.

    But when I am in another house (100m away from my homelab, but both connected to the same ISP through fibre cable), my MacBook can ping (and ssh to) the physical server (192.168.16.2), but cannot see the LXD container (192.168.16.3).

    Do I need to make any changes to my LXD setup in order to access my LXD container when away from my homelab?

    opened by nfam 9
  • lighthouse failed with error="sendto: function not implemented"

    I have been seeing this error since 1.2.0 and am still having the issue with 1.3.0.

    [root ~]# /var/lib/nebula/nebula -config /var/lib/nebula/config.yml
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip: proto:0 startPort:0]"
    INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip: proto:0 startPort:0]"
    INFO[0000] Firewall started                              firewallHash=21716b47a7a140e448077fe66c31b4b42f232e996818d7dd1c6c4991e066dbdb
    INFO[0000] Main HostMap created                          network=a.b.c.10/24 preferredRanges="[]"
    INFO[0000] UDP hole punching enabled       
    ERRO[0000] Failed to get udp listen address              error="function not implemented"
    INFO[0000] Nebula interface is active                    build=1.3.0 interface=nebula1 network=a.b.c.10/24 udpAddr="<nil>"
    ERRO[0000] failed to discover udp listening address      error="function not implemented"
    INFO[0000] Handshake message received                    certName=cbridge.local fingerprint=ad5d6ebd94f7390bbf6fcfc9fbcd34887e2fe5234e5498dd1a803ce3299ae309 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=653308818 remoteIndex=0 responderIndex=0 udpAddr="t.u.v.208:17760" vpnIp=a.b.c.26
    ERRO[0000] Failed to send handshake                      certName=cbridge.local error="sendto: function not implemented" fingerprint=ad5d6ebd94f7390bbf6fcfc9fbcd34887e2fe5234e5498dd1a803ce3299ae309 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=653308818 remoteIndex=0 responderIndex=1539139066 udpAddr="t.u.v.208:17760" vpnIp=a.b.c.26
    INFO[0001] Handshake message received                    certName=pi3-02.local fingerprint=844c4b453ca2d6144d45ca8af0ba2d5ed4b7e788c836d93f56ec059598bd9257 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=2113947748 remoteIndex=0 responderIndex=0 udpAddr="t.u.v.208:1031" vpnIp=a.b.c.28
    ERRO[0001] Failed to send handshake                      certName=pi3-02.local error="sendto: function not implemented" fingerprint=844c4b453ca2d6144d45ca8af0ba2d5ed4b7e788c836d93f56ec059598bd9257 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=2113947748 remoteIndex=0 responderIndex=570473939 udpAddr="t.u.v.208:1031" vpnIp=a.b.c.28
    ...
    ...
    ...
    

    The lighthouse is a DigitalOcean droplet running CentOS 7.8, with kernel 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

    Firewalld is running, with TCP and UDP port 4242 open.

    My configuration:

    # This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
    # Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)
    
    # PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
    pki:
      # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
      ca: /var/lib/nebula/ca.crt
      cert: /var/lib/nebula/host.crt
      key: /var/lib/nebula/host.key
      #blacklist is a list of certificate fingerprints that we will refuse to talk to
      #blacklist:
      #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
    
    # The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
    # A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
    # The syntax is:
    #   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
    # Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
    static_host_map:
      "a.b.c.10": ["t.u.v.122:4242"]
    
    lighthouse:
      # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
      # you have configured to be lighthouses in your network
      am_lighthouse: true
      # serve_dns optionally starts a dns listener that responds to various queries and can even be
      # delegated to for resolution
      #serve_dns: false
      #dns:
        # The DNS host defines the IP to bind the dns listener to. This also allows binding to the nebula node IP.
        #host: 0.0.0.0
        #port: 53
      # interval is the number of seconds between updates from this node to a lighthouse.
      # during updates, a node sends information about its current IP addresses to each node.
      interval: 30
      # hosts is a list of lighthouse hosts this node should report to and query from
      # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
    #  hosts:
    #    - "a.b.c.10"
    
    # Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
    # however using port 0 will dynamically assign a port and is recommended for roaming nodes.
    listen:
      host: 0.0.0.0
      port: 4242
      # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
      # default is 64, does not support reload
      #batch: 64
      # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
      # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default)
      # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
      # max, net.core.rmem_max and net.core.wmem_max
      #read_buffer: 10485760
      #write_buffer: 10485760
    
    # Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
    punchy: true
    
    # respond means that a node you are trying to reach will connect back out to you if your hole punching fails
    # this is extremely useful if one node is behind a difficult nat, such as a symmetric NAT
    # Default is false
    respond: true
    
    # delays a punch response for misbehaving NATs, default is 1 second, respond must be true to take effect
    delay: 1s
    
    # Cipher allows you to choose between the available ciphers for your network.
    # IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
    #cipher: chachapoly
    
    # Local range is used to define a hint about the local network range, which speeds up discovering the fastest
    # path to a network adjacent nebula node.
    #local_range: "172.16.0.0/24"
    
    # sshd can expose informational and administrative functions via ssh this is a
    #sshd:
      # Toggles the feature
      #enabled: true
      # Host and port to listen on, port 22 is not allowed for your safety
      #listen: 127.0.0.1:2222
      # A file containing the ssh host private key to use
      # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
      #host_key: ./ssh_host_ed25519_key
      # A file containing a list of authorized public keys
      #authorized_users:
        #- user: steeeeve
          # keys can be an array of strings or single string
          #keys:
            #- "ssh public key string"
    
    # Configure the private interface. Note: addr is baked into the nebula certificate
    tun:
      # Name of the device
      dev: nebula1
      # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
      drop_local_broadcast: false
      # Toggles forwarding of multicast packets
      drop_multicast: false
      # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
      tx_queue: 500
      # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
      mtu: 1300
      # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
      routes:
        #- mtu: 8800
        #  route: 10.0.0.0/16
      # Unsafe routes allows you to route traffic over nebula to non-nebula nodes
      # Unsafe routes should be avoided unless you have hosts/services that cannot run nebula
      # NOTE: The nebula certificate of the "via" node *MUST* have the "route" defined as a subnet in its certificate
      unsafe_routes:
        #- route: 172.16.1.0/24
        #  via: 192.168.100.99
        #  mtu: 1300 #mtu will default to tun mtu if this option is not specified
    
    
    # TODO
    # Configure logging level
    logging:
      # panic, fatal, error, warning, info, or debug. Default is info
      level: info
      # json or text formats currently available. Default is text
      format: text
    
    #stats:
      #type: graphite
      #prefix: nebula
      #protocol: tcp
      #host: 127.0.0.1:9999
      #interval: 10s
    
      #type: prometheus
      #listen: 127.0.0.1:8080
      #path: /metrics
      #namespace: prometheusns
      #subsystem: nebula
      #interval: 10s
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is default deny. There is no way to write a deny rule.
      # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
      # Logical evaluation is roughly: port AND proto AND (ca_sha OR ca_name) AND (host OR group OR groups OR cidr)
      # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
      #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
      #   proto: `any`, `tcp`, `udp`, or `icmp`
      #   host: `any` or a literal hostname, ie `test-host`
      #   group: `any` or a literal group name, ie `default-group`
      #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
      #   cidr: a CIDR, `0.0.0.0/0` is any.
      #   ca_name: An issuing CA name
      #   ca_sha: An issuing CA shasum
    
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow icmp between any nebula hosts
        - port: any
          proto: any
          host: any
    
    opened by admun 9
  • illegal Instruction on MerlinWRT armv7

    I have tried to execute the v1.2.0 armv7 binary on a MerlinWRT armv7 machine, but got an "Illegal Instruction" error.

    • ASUS AC-RT68U router running MerlinWRT 384.15

    How can I get it to work?

    # part of config
    listen:
      host: 0.0.0.0
      port: 34242
      punch: true
    
      respond: true
    
    cipher: chachapoly
    
    tun:
      dev: nebula1
      drop_local_broadcast: false
      drop_multicast: false
      tx_queue: 500
      mtu: 1300
      routes:
      unsafe_routes:
    
    firewall:
      conntrack:
        tcp_timeout: 120h
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is def
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow icmp between any nebula hosts
        - port: any
          proto: any
          host: any
    
        # Allow tcp/443 from any host with BOTH laptop and home group
        - port: 443
          proto: tcp
          groups:
            - laptop
            - home
    
    opened by OceanApart 9
  • consistent handshake error in logs

    Hi, and first off, apologies if this is the wrong forum to report this; feel free to close if it is. I really like the idea behind Nebula and have set up a network to test it out, but have come across a strange issue.

    I have a lighthouse with a public IP and one node on a VM on my laptop. I am consistently seeing the error below if I run a ping between the two devices. Every time the error is returned, the ping time jumps considerably, and since it happens every 5 or so responses, it's affecting performance. I can't find anything about the error in my searches online:

    ERRO[0000] Already seen this handshake packet handshake="map[stage:2 style:ix_psk0]" header="ver=1 type=handshake subtype=ix_psk0 reserved=0x0 remoteindex=289222265 messagecounter=2" udpAddr="#.#.#.#:4242" vpnIp=192.168.99.100

    $ ping 192.168.99.100
    PING 192.168.99.100 (192.168.99.100) 56(84) bytes of data.
    64 bytes from 192.168.99.100: icmp_seq=4 ttl=64 time=265 ms
    64 bytes from 192.168.99.100: icmp_seq=5 ttl=64 time=55.0 ms
    64 bytes from 192.168.99.100: icmp_seq=6 ttl=64 time=50.1 ms
    64 bytes from 192.168.99.100: icmp_seq=7 ttl=64 time=48.9 ms
    64 bytes from 192.168.99.100: icmp_seq=10 ttl=64 time=324 ms
    64 bytes from 192.168.99.100: icmp_seq=11 ttl=64 time=57.8 ms
    

    Happy to troubleshoot if there any suggestions. Thank you.

    opened by TechSec-me 9
  • handshake: update to preferred remote

    If we receive a handshake packet for a tunnel that has already been completed, check to see if the new remote is preferred. If so, update to the preferred remote and send a test packet to influence the other side to do the same.

    opened by wadey 0
  • Lighthouse randomly stops receiving packets

    My lighthouse server, which also runs other services, will occasionally lock up and stop receiving packets from anywhere, on both the nebula interface and the regular one. Restarting the nebula service fixes the problem. I don't know what is triggering this. I'll try to post more info if it happens again; any debugging suggestions would be appreciated.

    opened by zethra 1
  • Hub and spoke topology

    First of all, awesome work, guys!

    We have a situation where the customer might not want full mesh connectivity between all the nodes, but only connectivity from each node to a central hub. The central hub could be the lighthouse itself or a separate node.

    Is there any way to define which connections should be allowed between which nodes?

    Thanks

    opened by chirayu-patel 1
  • ubuntu: can't open config file with nebula snap, but works fine with GitHub binary

    I installed the nebula client from snap but when I try to run nebula I get:

    % sudo /snap/bin/nebula -config /usr/local/etc/nebula-config.yaml
    failed to load config: no config files found at /usr/local/etc/nebula-config.yaml

    When I grab the amd64 tarball from GitHub and run it against the same config file, it works just fine. They're both version 1.4.0 as far as I can tell.

    opened by mlindgren80 0
  • Nebula and OPNsense issues

    I'm having issues getting OPNsense and nebula to play together nicely.

    The OPNsense VM itself can ping each nebula host fine, but clients on that OPNsense network can't. I have tried setting a gateway and route manually, to no avail; those routes already seem to be there anyway.

    I can't get the unsafe_routes feature to work either. The OPNsense cert had the subnet added when I made it, and all the other clients have the route configured to go via the OPNsense router's nebula IP. I can't ping anything.
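
    For reference, unsafe_routes requires the routed subnet to be baked into the "via" node's certificate at signing time (per the NOTE in the example config below). That looks roughly like this, where the name and netmask are illustrative:

    ./nebula-cert sign -name "opnsense" -ip "10.255.0.2/24" -subnets "10.0.0.0/24"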

    Router nebula config

    # This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
    # Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)
    
    # PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
    pki:
      # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
      ca: ca.crt
      cert: cert.pem
      key: key.pem
      #blocklist is a list of certificate fingerprints that we will refuse to talk to
      #blocklist:
      #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
    
    # The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
    # A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
    # The syntax is:
    #   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
    # Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
    static_host_map:
      "10.255.0.1": ["nebula.swagnet.cf:4242"]
    
    
    lighthouse:
      # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
      # you have configured to be lighthouses in your network
      am_lighthouse: false
      # serve_dns optionally starts a dns listener that responds to various queries and can even be
      # delegated to for resolution
      #serve_dns: false
      #dns:
        # The DNS host defines the IP to bind the dns listener to. This also allows binding to the nebula node IP.
        #host: 0.0.0.0
        #port: 53
      # interval is the number of seconds between updates from this node to a lighthouse.
      # during updates, a node sends information about its current IP addresses to each node.
      interval: 60
      # hosts is a list of lighthouse hosts this node should report to and query from
      # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
      # IMPORTANT2: THIS SHOULD BE LIGHTHOUSES' NEBULA IPs, NOT LIGHTHOUSES' REAL ROUTABLE IPs
      hosts:
        - "10.255.0.1"
    
      # remote_allow_list allows you to control ip ranges that this node will
      # consider when handshaking to another node. By default, any remote IPs are
      # allowed. You can provide CIDRs here with `true` to allow and `false` to
      # deny. The most specific CIDR rule applies to each remote. If all rules are
      # "allow", the default will be "deny", and vice-versa. If both "allow" and
      # "deny" rules are present, then you MUST set a rule for "0.0.0.0/0" as the
      # default.
      #remote_allow_list:
        # Example to block IPs from this subnet from being used for remote IPs.
        #"172.16.0.0/12": false
    
        # A more complicated example, allow public IPs but only private IPs from a specific subnet
        #"0.0.0.0/0": true
        #"10.0.0.0/8": false
        #"10.42.42.0/24": true
    
      # local_allow_list allows you to filter which local IP addresses we advertise
      # to the lighthouses. This uses the same logic as `remote_allow_list`, but
      # additionally, you can specify an `interfaces` map of regular expressions
      # to match against interface names. The regexp must match the entire name.
      # All interface rules must be either true or false (and the default will be
      # the inverse). CIDR rules are matched after interface name rules.
      # Default is all local IP addresses.
      #local_allow_list:
        # Example to block tun0 and all docker interfaces.
        #interfaces:
          #tun0: false
          #'docker.*': false
        # Example to only advertise this subnet to the lighthouse.
        #"10.0.0.0/8": true
    
    # Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
    # however using port 0 will dynamically assign a port and is recommended for roaming nodes.
    listen:
      # To listen on both any ipv4 and ipv6 use "[::]"
      host: "[::]"
      port: 4242
      # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
      # default is 64, does not support reload
      #batch: 64
      # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
      # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default)
      # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE are used to avoid having to raise the system wide
      # max, net.core.rmem_max and net.core.wmem_max
      #read_buffer: 10485760
      #write_buffer: 10485760
    
    # EXPERIMENTAL: This option is currently only supported on linux and may
    # change in future minor releases.
    #
    # Routines is the number of thread pairs to run that consume from the tun and UDP queues.
    # Currently, this defaults to 1 which means we have 1 tun queue reader and 1
    # UDP queue reader. Setting this above one will set IFF_MULTI_QUEUE on the tun
    # device and SO_REUSEPORT on the UDP socket to allow multiple queues.
    #routines: 1
    
    punchy:
      # Continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
      punch: true
    
      # respond means that a node you are trying to reach will connect back out to you if your hole punching fails
      # this is extremely useful if one node is behind a difficult nat, such as a symmetric NAT
      # Default is false
      respond: true
    
      # delays a punch response for misbehaving NATs, default is 1 second, respond must be true to take effect
      #delay: 1s
    
    # Cipher allows you to choose between the available ciphers for your network. Options are chachapoly or aes
    # IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
    cipher: chachapoly
    
    # Local range is used to define a hint about the local network range, which speeds up discovering the fastest
    # path to a network adjacent nebula node.
    #local_range: "172.16.0.0/24"
    
    # sshd can expose informational and administrative functions via ssh.
    #sshd:
      # Toggles the feature
      #enabled: true
      # Host and port to listen on, port 22 is not allowed for your safety
      #listen: 127.0.0.1:2222
      # A file containing the ssh host private key to use
      # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
      #host_key: ./ssh_host_ed25519_key
      # A file containing a list of authorized public keys
      #authorized_users:
        #- user: steeeeve
          # keys can be an array of strings or single string
          #keys:
            #- "ssh public key string"
    
    # Configure the private interface. Note: addr is baked into the nebula certificate
    tun:
      # When tun is disabled, a lighthouse can be started without a local tun interface (and therefore without root)
      disabled: false
      # Name of the device
      dev: tun1
      # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
      drop_local_broadcast: false
      # Toggles forwarding of multicast packets
      drop_multicast: false
      # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
      tx_queue: 500
      # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
      mtu: 1300
      # Route based MTU overrides. If you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
      routes:
        #- mtu: 8800
        #  route: 10.0.0.0/16
      # Unsafe routes allows you to route traffic over nebula to non-nebula nodes
      # Unsafe routes should be avoided unless you have hosts/services that cannot run nebula
      # NOTE: The nebula certificate of the "via" node *MUST* have the "route" defined as a subnet in its certificate
      unsafe_routes:
        - route: 10.0.0.0/24
          via: 10.255.0.2
        #- route: 172.16.1.0/24
        #  via: 192.168.100.99
        #  mtu: 1300 # mtu will default to tun mtu if this option is not specified
    
    
    # TODO
    # Configure logging level
    logging:
      # panic, fatal, error, warning, info, or debug. Default is info
      level: info
      # json or text formats currently available. Default is text
      format: text
      # Disable timestamp logging. useful when output is redirected to logging system that already adds timestamps. Default is false
      #disable_timestamp: true
      # timestamp format is specified in Go time format, see:
      #     https://golang.org/pkg/time/#pkg-constants
      # default when `format: json`: "2006-01-02T15:04:05Z07:00" (RFC3339)
      # default when `format: text`:
      #     when TTY attached: seconds since beginning of execution
      #     otherwise: "2006-01-02T15:04:05Z07:00" (RFC3339)
      # As an example, to log as RFC3339 with millisecond precision, set to:
      #timestamp_format: "2006-01-02T15:04:05.000Z07:00"
    
    #stats:
      #type: graphite
      #prefix: nebula
      #protocol: tcp
      #host: 127.0.0.1:9999
      #interval: 10s
    
      #type: prometheus
      #listen: 127.0.0.1:8080
      #path: /metrics
      #namespace: prometheusns
      #subsystem: nebula
      #interval: 10s
    
      # enables counter metrics for meta packets
      #   e.g.: `messages.tx.handshake`
      # NOTE: `message.{tx,rx}.recv_error` is always emitted
      #message_metrics: false
    
      # enables detailed counter metrics for lighthouse packets
      #   e.g.: `lighthouse.rx.HostQuery`
      #lighthouse_metrics: false
    
    # Handshake Manager Settings
    #handshakes:
      # Handshakes are sent to all known addresses at each interval with a linear backoff,
      # Wait try_interval after the 1st attempt, 2 * try_interval after the 2nd, etc, until the handshake is older than timeout
      # A 100ms interval with the default 10 retries will give a handshake 5.5 seconds to resolve before timing out
      #try_interval: 100ms
      #retries: 20
      # trigger_buffer is the size of the buffer channel for quickly sending handshakes
      # after receiving the response for lighthouse queries
      #trigger_buffer: 64
    
    
    # Nebula security group configuration
    firewall:
      conntrack:
        tcp_timeout: 12m
        udp_timeout: 3m
        default_timeout: 10m
        max_connections: 100000
    
      # The firewall is default deny. There is no way to write a deny rule.
      # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
      # Logical evaluation is roughly: port AND proto AND (ca_sha OR ca_name) AND (host OR group OR groups OR cidr)
      # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
      #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
      #   proto: `any`, `tcp`, `udp`, or `icmp`
      #   host: `any` or a literal hostname, ie `test-host`
      #   group: `any` or a literal group name, ie `default-group`
      #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
      #   cidr: a CIDR, `0.0.0.0/0` is any.
      #   ca_name: An issuing CA name
      #   ca_sha: An issuing CA shasum
    
      outbound:
        # Allow all outbound traffic from this node
        - port: any
          proto: any
          host: any
    
      inbound:
        # Allow all traffic between any nebula hosts
        - port: any
          proto: any
          host: any
    
    opened by paz 0
  • Adding FQDN as option

    while noting the caveat that a restart is required to pick up DNS changes
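
    For context, static_host_map entries already accept a DNS name in place of an IP address (the example config documents the value as "{routable ip/dns name}:{routable port}"); e.g., with a placeholder hostname:

    static_host_map:
      "192.168.100.1": ["lighthouse.example.com:4242"]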

    opened by silversword411 1
  • Explain how groups work in the readme

    The 'groups' feature and how to use it for network segregation are not explained anywhere, as far as I have seen. The documentation really needs to say that groups are assigned via optional parameters when creating certs, and a worked example that explains how to set up a network with segmentation would be very helpful.
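
    For anyone searching: groups are baked into each host's certificate at signing time and are then referenced from firewall rules. A rough sketch, with placeholder names, IPs, and ports:

    # Assign groups when signing the host certificate
    ./nebula-cert sign -name "laptop1" -ip "192.168.100.5/24" -groups "laptop,home"

    # Reference a group from another node's firewall config
    firewall:
      inbound:
        - port: 22
          proto: tcp
          group: laptop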

    opened by robehickman 0
  • add a note about sub help pages to the root '-h'

    When nebula-cert is called with '-h' to display help, it does not show any information about which optional parameters the sub-modes (ca, sign, etc.) accept. I discovered that I can list them by running 'nebula-cert ca -h', for example, but this was not obvious to me. It would be clearer if a note like "to see the parameters for each mode, please run 'nebula-cert [mode] -h'" were added to the root help message.
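
    To illustrate the current behaviour:

    ./nebula-cert -h         # lists the modes (ca, sign, etc.) but not their flags
    ./nebula-cert ca -h      # shows the optional parameters for the ca mode
    ./nebula-cert sign -h    # shows the optional parameters for the sign mode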

    opened by robehickman 0
  • Use system routing instead of unsafe_routes?

    Hey, I was wondering: is it possible to route using system routing, e.g. ip route add or other system equivalents? It would be helpful for more complex routing, e.g. policy routing over different exit nodes. Thanks in advance.
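
    For context, unsafe_routes essentially installs system routes pointing at the nebula tun device, so plain ip route can express similar things. A sketch, assuming a Linux host, a tun device named nebula1, and a nebula node at 192.168.100.99 acting as the gateway (nebula's certificate subnet and firewall checks still apply to traffic sent this way):

    ip route add 172.16.1.0/24 via 192.168.100.99 dev nebula1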

    opened by kaplan-michael 1
  • Cannot connect to lighthouse when client WAN IP changes until cache expires (if ever)

    First off, I would like to thank you for sharing this tool with the world as open-source software.

    I tried searching for a similar existing issue, but I am unsure if #512 qualifies.

    I have a nagging issue that causes nebula to disconnect and then not reconnect/recover for over 5 minutes. I believe it stems from a stale external IP cache, but I can't tell where that cache is stored, as restarting the lighthouse (nebula_v1.4.0_linux_amd64) and the client (nebula_v1.4.0_linux_arm64) doesn't resolve it. Eventually everything sorts itself out, but on a painfully slow timeframe.

    Lighthouse: vendor_id: GenuineIntel, cpu family: 6, model: 61, model name: Intel Core Processor (Broadwell, no TSX, IBRS)

    Client: Processor: AArch64 Processor rev 4 (aarch64), Hardware: sun50iw1p1

    Please find the logs from the lighthouse node below:

    Aug 24 17:02:52 cdn nebula[21623]: time="2021-08-24T17:02:52Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:52 cdn nebula[21623]: time="2021-08-24T17:02:52Z" level=info msg="Taking new handshake" certName={filtered} vpnIp=192.168.3.2
    Aug 24 17:02:52 cdn nebula[21623]: time="2021-08-24T17:02:52Z" level=info msg="Handshake message sent" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:2 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=1226101364 sentCachedPackets=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:52 cdn nebula[21623]: time="2021-08-24T17:02:52Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:52 cdn nebula[21623]: time="2021-08-24T17:02:52Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:53 cdn nebula[21623]: time="2021-08-24T17:02:53Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:53 cdn nebula[21623]: time="2021-08-24T17:02:53Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:53 cdn nebula[21623]: time="2021-08-24T17:02:53Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:53 cdn nebula[21623]: time="2021-08-24T17:02:53Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:54 cdn nebula[21623]: time="2021-08-24T17:02:54Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:54 cdn nebula[21623]: time="2021-08-24T17:02:54Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:55 cdn nebula[21623]: time="2021-08-24T17:02:55Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:55 cdn nebula[21623]: time="2021-08-24T17:02:55Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:55 cdn nebula[21623]: time="2021-08-24T17:02:55Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:55 cdn nebula[21623]: time="2021-08-24T17:02:55Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:57 cdn nebula[21623]: time="2021-08-24T17:02:57Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:57 cdn nebula[21623]: time="2021-08-24T17:02:57Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:58 cdn nebula[21623]: time="2021-08-24T17:02:58Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:58 cdn nebula[21623]: time="2021-08-24T17:02:58Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:59 cdn nebula[21623]: time="2021-08-24T17:02:59Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:02:59 cdn nebula[21623]: time="2021-08-24T17:02:59Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:01 cdn nebula[21623]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:01 cdn nebula[21623]: time="2021-08-24T17:03:01Z" level=info msg="Taking new handshake" certName={filtered} vpnIp=192.168.3.2
    Aug 24 17:03:01 cdn nebula[21623]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message sent" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:2 style:ix_psk0]" initiatorIndex=1064900936 remoteIndex=0 responderIndex=2616167467 sentCachedPackets=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:01 cdn nebula[21623]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:01 cdn nebula[21623]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:01 cdn nebula[21623]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:01 cdn nebula[21623]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:02 cdn nebula[21623]: time="2021-08-24T17:03:02Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:02 cdn nebula[21623]: time="2021-08-24T17:03:02Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:03 cdn nebula[21623]: time="2021-08-24T17:03:03Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:03 cdn nebula[21623]: time="2021-08-24T17:03:03Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:03 cdn nebula[21623]: time="2021-08-24T17:03:03Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:03 cdn nebula[21623]: time="2021-08-24T17:03:03Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:04 cdn nebula[21623]: time="2021-08-24T17:03:04Z" level=info msg="Handshake message received" certName={filtered} fingerprint=9a8af4edd523c19c7c46d8fe3b1bad102f08d6c6e5b1c97e9ed76ab4f18b5bff handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 remoteIndex=0 responderIndex=0 udpAddr="{filtered}:13695" vpnIp=192.168.3.2
    Aug 24 17:03:04 cdn nebula[21623]: time="2021-08-24T17:03:04Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="{filtered}:13695" vpnIp=192.168.3.2

    Logs on the client:

    Aug 24 17:02:52 homester nebula[12637]: time="2021-08-24T17:02:52Z" level=info msg="Handshake timed out" durationNs=8379105950 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3972176064 remoteIndex=0 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:52 homester nebula[12637]: time="2021-08-24T17:02:52Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:52 homester nebula[12637]: time="2021-08-24T17:02:52Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:53 homester nebula[12637]: time="2021-08-24T17:02:53Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:53 homester nebula[12637]: time="2021-08-24T17:02:53Z" level=info msg="Handshake timed out" durationNs=8579430949 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3580733626 remoteIndex=0 udpAddrs="[]" vpnIp=192.168.3.3
    Aug 24 17:02:53 homester nebula[12637]: time="2021-08-24T17:02:53Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:54 homester nebula[12637]: time="2021-08-24T17:02:54Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:55 homester nebula[12637]: time="2021-08-24T17:02:55Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:55 homester nebula[12637]: time="2021-08-24T17:02:55Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:57 homester nebula[12637]: time="2021-08-24T17:02:57Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:58 homester nebula[12637]: time="2021-08-24T17:02:58Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:02:59 homester nebula[12637]: time="2021-08-24T17:02:59Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:03:01 homester nebula[12637]: time="2021-08-24T17:03:01Z" level=info msg="Handshake timed out" durationNs=8495165069 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=901163749 remoteIndex=0 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:03:01 homester nebula[12637]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:03:01 homester nebula[12637]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:03:01 homester nebula[12637]: time="2021-08-24T17:03:01Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:03:02 homester nebula[12637]: time="2021-08-24T17:03:02Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:03:02 homester nebula[12637]: time="2021-08-24T17:03:02Z" level=info msg="Handshake timed out" durationNs=8763083880 handshake="map[stage:1 style:ix_psk0]" initiatorIndex=4048442694 remoteIndex=0 udpAddrs="[]" vpnIp=192.168.3.3
    Aug 24 17:03:03 homester nebula[12637]: time="2021-08-24T17:03:03Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1
    Aug 24 17:03:03 homester nebula[12637]: time="2021-08-24T17:03:03Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1064900936 udpAddrs="[{filtered}:13695]" vpnIp=192.168.3.1

    opened by borrelan 5
Releases (v1.4.0)
Owner
Slack
On a mission to make your working life simpler, more pleasant and more productive.