The official container networking plugin for both OECP of Alibaba Cloud and SOFAStack of Ant Financial Co.

Overview

Rama

What is Rama?

Rama is an open source container networking solution, integrated with Kubernetes and officially used by the following well-known PaaS platforms:

  • OECP of Alibaba Cloud
  • SOFAStack of Ant Financial Co.

Rama focuses on large scale, user-friendliness and heterogeneous infrastructure; hundreds of clusters all over the world are now running on Rama.

Features

  • Flexible network models: a three-level model of Network, Subnet and IPInstance, all implemented as CRDs
  • DualStack: three optional modes, IPv4Only, IPv6Only and DualStack
  • Hybrid network fabric: overlay and underlay pods supported at the same time
  • Advanced IPAM: Network/Subnet/IPInstance assignment and IP retention for stateful workloads
  • Kube-proxy friendly: works well with iptables-mode kube-proxy
  • ARM support: runs on both x86_64 and arm64 architectures
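
Because Network, Subnet and IPInstance are ordinary CRDs, allocation results can be inspected with any Kubernetes client. Below is a minimal sketch using client-go's dynamic client to list IPInstance objects; the group/version/resource names are assumptions and should be checked against the CRDs actually installed in the cluster.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the local kubeconfig (use rest.InClusterConfig() inside a cluster).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := dynamic.NewForConfigOrDie(config)

        // GroupVersionResource of the IPInstance CRD; assumed here, verify with `kubectl api-resources`.
        gvr := schema.GroupVersionResource{Group: "networking.alibaba.com", Version: "v1", Resource: "ipinstances"}

        // List IP allocations across all namespaces.
        list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, item := range list.Items {
            fmt.Printf("%s/%s\n", item.GetNamespace(), item.GetName())
        }
    }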

Contributing

Rama welcomes contributions, including bug reports, feature requests and documentation improvements. If you want to contribute, please start with CONTRIBUTING.md.

Contact

For any questions about Rama, please reach us via:

  • Slack: #general on the rama slack
  • DingTalk: Group No.35109308
  • E-mail: private or security issues should be reported via e-mail addresses listed in the MAINTAINERS file

License

Apache 2.0 License

Issues
  • bugfix: iptables compatibility for nftables-based hosts

    bugfix: iptables compatibility for nftables-based hosts

    Pull Request Description

    Describe what this PR does / why we need it

    The iptables tooling runs into an nftables compatibility problem on some hosts, e.g. CentOS 8.

    kube-proxy and Cilium have already solved this problem, and so should we.

    Does this pull request fix one issue?

    Fixes #29

    Describe how you did it

    COPY and RUN iptables-wrapper-install.sh in the Dockerfile.

    The shell script comes from: https://github.com/kubernetes-sigs/iptables-wrappers/

    Describe how to verify it

    Pull the image and deploy it on a k8s cluster with both iptables-legacy-based hosts and iptables-nft-based hosts.

    Run sudo iptables-save | grep RAMA; you should see the related entries on both types of hosts.

    Special notes for reviews

    opened by SwimilTylers 7
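
    A rough Go sketch of the detection idea behind that wrapper script, for illustration only: it compares how many entries each backend already holds and picks the busier one, which is roughly what kubernetes-sigs/iptables-wrappers does at container start. It assumes the iptables-legacy-save and iptables-nft-save binaries are present in the image.

        package main

        import (
            "bytes"
            "fmt"
            "os/exec"
            "strings"
        )

        // countEntries returns how many chain/rule lines the given *-save binary prints.
        func countEntries(saveBin string) int {
            var out bytes.Buffer
            cmd := exec.Command(saveBin)
            cmd.Stdout = &out
            if err := cmd.Run(); err != nil {
                return 0
            }
            n := 0
            for _, line := range strings.Split(out.String(), "\n") {
                if strings.HasPrefix(line, ":") || strings.HasPrefix(line, "-") {
                    n++
                }
            }
            return n
        }

        func main() {
            // Pick the backend that already holds the host's rules.
            legacy := countEntries("iptables-legacy-save")
            nft := countEntries("iptables-nft-save")
            if nft > legacy {
                fmt.Println("link iptables to the nft backend (iptables-nft)")
            } else {
                fmt.Println("link iptables to the legacy backend (iptables-legacy)")
            }
        }
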
  • change Kube-ovn to Kube-OVN and fix grammar issues

    change Kube-ovn to Kube-OVN and fix grammar issues

    Pull Request Description

    Describe what this PR does / why we need it

    Change Kube-ovn to Kube-OVN and fix grammar issues

    Does this pull request fix one issue?

    No

    Describe how you did it

    No

    Describe how to verify it

    Doc change

    Special notes for reviews

    It also clarifies that Kube-OVN has a default VPC and Subnet that can be used by workloads in all namespaces. Even if users know nothing about the VPC and the Subnet, the default settings make everything work just like Flannel.

    opened by oilbeater 5
  • iptables not correctly configured on CentOS 8 host

    iptables not correctly configured on CentOS 8 host

    Bug Report

    Type: bug report

    What happened

    This might be a compatibility problem.

    I set up a k8s cluster with CentOS 8 (which links iptables to nftables) and ran pods on Overlay and Underlay networks. On the host machines, lsmod | grep ip_tables shows that ip_tables is used by iptable_nat, iptable_mangle, and iptable_filter.

    After checking the logs of the daemon pods, I believe the iptables rules are written without error. But on the host machines, no rama-related iptables rules show up through either iptables-save or nft list ruleset.

    It should also be mentioned that iptables-save warns that there are more rules in the iptables-legacy tables.

    What you expected to happen

    I should observe rama-related rules in iptables-save.

    How to reproduce it (as minimally and precisely as possible)

    Set up a k8s cluster with CentOS 8 nodes. Install rama and run several Overlay/Underlay pods.

    Anything else we need to know?

    CentOS 8 removes iptables from packages and links it to nftables.

    Kube-proxy works perfectly.

    Environment

    • rama version: v1
    • OS (e.g. cat /etc/os-release): CentOS 8
    • Kernel (e.g. uname -a): Linux 4.18.0-305.7.1.el8_4.x86_6
    • Kubernetes version: v1.21.
    enhancement 
    opened by SwimilTylers 5
  • enhanced address is being used from node to local pods unexpectedly

    enhanced address is being used from node to local pods unexpectedly

    Bug Report

    Type: bug report

    What happened

    The enhanced address is used as the source address when we ping local pods from the node, which causes the ICMP reply to never come back.

    What you expected to happen

    How to reproduce it (as minimally and precisely as possible)

    Anything else we need to know?

    Environment

    • hybridnet version: 3.2.0
    • OS (e.g. cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Kubernetes version:
    • Install tools:
    • Others:
    opened by mars1024 2
  • update comment on subnet's spec.netID

    update comment on subnet's spec.netID

    Eliminate possible misunderstanding.

    Pull Request Description

    Describe what this PR does / why we need it

    Does this pull request fix one issue?

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

    opened by HaiyangDING 2
  • charts: allow replica assignment for deployments

    charts: allow replica assignment for deployments

    Signed-off-by: Bruce Ma [email protected]

    Pull Request Description

    Describe what this PR does / why we need it

    Allow replica assignment for deployments; this may be useful when a user has only one master node.

    Does this pull request fix one issue?

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

    opened by mars1024 2
  • webhook: reject excludedIPs change in subnet

    webhook: reject excludedIPs change in subnet

    Signed-off-by: Bruce Ma [email protected]

    Pull Request Description

    Describe what this PR does / why we need it

    Does this pull request fix one issue?

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

    opened by mars1024 2
  • feature: introduce felix for networkpolicy

    feature: introduce felix for networkpolicy

    Pull Request Description

    Describe what this PR does / why we need it

    Introduce felix (v3.20.2) for NetworkPolicy.

    Does this pull request fix one issue?

    NONE

    Describe how you did it

    The implementation references terway; the biggest difference is changing the name of the veth pair.

    Describe how to verify it

    Run the k8s e2e tests via ./sonobuoy run --e2e-focus="\[Feature:NetworkPolicy\]" --e2e-skip="" --image-pull-policy IfNotPresent.

    Special notes for reviews

    opened by hhyasdf 2
  • allow specified subnets without networks

    allow specified subnets without networks

    Pull Request Description

    Describe what this PR does / why we need it

    If a subnet is specified, its network should be validated and auto-specified.

    Does this pull request fix one issue?

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

    opened by mars1024 2
  • IPInstance not released if pod is Completed or Evicted

    IPInstance not released if pod is Completed or Evicted

    Bug Report

    Type: bug report

    What happened

    If a pod is evicted or is a completed Job pod, its IPInstance will not be released until the pod is deleted manually. It may be better for hybridnet to release such IPInstances itself.

    What you expected to happen

    IPInstance can be released if a pod is "Completed" or "Evicted".

    How to reproduce it (as minimally and precisely as possible)

    Running a Job or getting a pod evicted can reproduce it.

    Anything else we need to know?

    Environment

    • hybridnet version: v0.2.1
    • OS (e.g. cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Kubernetes version:
    • Install tools:
    • Others:
    opened by hhyasdf 2
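
    A minimal Go sketch of the release condition, assuming the usual kubelet conventions that a completed pod reports phase Succeeded and an evicted pod reports phase Failed with reason "Evicted":

        package main

        import (
            "fmt"

            corev1 "k8s.io/api/core/v1"
        )

        // ipReleasable reports whether a pod no longer needs its IPInstance.
        func ipReleasable(pod *corev1.Pod) bool {
            switch {
            case pod.Status.Phase == corev1.PodSucceeded: // "Completed"
                return true
            case pod.Status.Phase == corev1.PodFailed && pod.Status.Reason == "Evicted":
                return true
            }
            return false
        }

        func main() {
            completed := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodSucceeded}}
            fmt.Println(ipReleasable(completed)) // true
        }
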
  • taking higher priority on specified-network annotation/label on selecting network for pod

    taking higher priority on specified-network annotation/label on selecting network for pod

    Signed-off-by: Bruce Ma [email protected]

    Pull Request Description

    Describe what this PR does / why we need it

    When a pod has a specified-network annotation that points to an overlay network but no network-type annotation, the IPAM controller will pick an underlay network, which conflicts with the specified overlay network.

    Does this pull request fix one issue?

    NONE

    Describe how you did it

    Pick the network for a pod based on the specified-network annotation/label first.

    Describe how to verify it

    NONE

    Special notes for reviews

    NONE

    opened by mars1024 2
  • support VM IP retain for kubevirt

    support VM IP retain for kubevirt

    Pull Request Description

    Describe what this PR does / why we need it

    Support IP retention for KubeVirt VMs.

    Does this pull request fix one issue?

    Describe how you did it

    Describe how to verify it

    Special notes for reviews

    opened by hhyasdf 0
  • Support IP multicast

    Support IP multicast

    Issue Description

    Type: feature request

    Describe what feature you want

    1. Support IP multicast in the overlay network
    2. Support IP multicast between underlay network pods and the underlying network

    Additional context

    opened by hhyasdf 0
  • add a "can-reach" method to choose host NIC

    add a "can-reach" method to choose host NIC

    Issue Description

    Type: feature request

    Describe what feature you want

    Add "can-reach" parameters for daemon to choose host NIC, just like calico.

    Additional context

    opened by hhyasdf 0
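
    One common way to implement such a probe is a route lookup: a UDP "connect" to the can-reach address sends no packets but lets the kernel pick the local source address, which can then be mapped back to an interface. A minimal Go sketch, with an arbitrary probe target and port:

        package main

        import (
            "fmt"
            "net"
        )

        // localAddrFor asks the kernel which local address it would use to reach target.
        // No packet is sent; a UDP "connect" only performs a route lookup.
        func localAddrFor(target string) (net.IP, error) {
            conn, err := net.Dial("udp", net.JoinHostPort(target, "53"))
            if err != nil {
                return nil, err
            }
            defer conn.Close()
            return conn.LocalAddr().(*net.UDPAddr).IP, nil
        }

        // nicFor returns the name of the interface that owns the given IP.
        func nicFor(ip net.IP) (string, error) {
            ifaces, err := net.Interfaces()
            if err != nil {
                return "", err
            }
            for _, iface := range ifaces {
                addrs, _ := iface.Addrs()
                for _, addr := range addrs {
                    if n, ok := addr.(*net.IPNet); ok && n.IP.Equal(ip) {
                        return iface.Name, nil
                    }
                }
            }
            return "", fmt.Errorf("no interface owns %s", ip)
        }

        func main() {
            ip, err := localAddrFor("8.8.8.8") // the "can-reach" destination
            if err != nil {
                panic(err)
            }
            nic, err := nicFor(ip)
            if err != nil {
                panic(err)
            }
            fmt.Printf("can-reach 8.8.8.8 via %s on %s\n", ip, nic)
        }
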
  • Remove DualStack feature gate and make it a built-in behavior

    Remove DualStack feature gate and make it a built-in behavior

    Issue Description

    Type: feature request

    Describe what feature you want

    Currently, to deploy a DualStack hybridnet cluster, we need to enable the DualStack feature gate via manager/webhook parameters.

    But it now seems that "an ipv4-only cluster with the DualStack feature gate disabled" and "a dual-stack cluster (DualStack feature gate enabled) with only ipv4 subnets" are not different at all. Maybe it is time to remove the DualStack feature gate and make it built-in behavior.

    Additional context

    enhancement 
    opened by hhyasdf 2
Releases
  • helm-chart-0.3.1(Jun 16, 2022)

  • v0.5.1(Jun 15, 2022)

  • helm-chart-0.3.0(May 31, 2022)

  • v0.5.0(May 27, 2022)

    New features

    • Change IPInstance APIs and optimize IP allocation performance of manager
    • Introduce GlobalBGP type Network

    Improvements

    • Bump controller-runtime from v0.8.3 to v0.9.7
  • helm-chart-0.2.2(May 27, 2022)

  • v0.3.4(May 17, 2022)

  • v0.4.4(May 11, 2022)

    Fixed Issues

    • Fix the error that nodes get an "empty" quota while the Underlay Network still has available addresses to allocate
    • Fix the daemon policy container exiting with an ip6tables-legacy-save error
  • helm-chart-0.2.1(May 9, 2022)

  • v0.4.3(Apr 26, 2022)

    Improvements

    • Introduce flag enable-vlan-arp-enhancement for daemon to enable/disable enhanced addresses
    • Introduce DEFAULT_IP_FAMILY environment variable on dual-stack mode
    • Skip webhook validation on host-networking pods
    • Introduce vtep-address-cidrs flag for daemon to help select vtep address

    Fixed Issues

    • Fix a daemon policy container init error on ipv6-only nodes
    • Node annotation changes should trigger a reconcile of the daemon Node controller
    • Fix the "to overlay subnet route table 40000 is used by others" daemon error, which happens if an ipv6 subnet with excluded ip ranges is created
    • Fix a daemon error when updating dual-stack IPInstance status
    • Fix the error that arp enhanced addresses are mistakenly taken as the source IP address
    • Fix the error that deprecated bgp rules and routes are not cleaned up
  • v0.3.3(Apr 25, 2022)

  • helm-chart-0.2.0(Apr 12, 2022)

  • helm-chart-0.1.2(Mar 22, 2022)

  • helm-chart-0.1.1(Mar 18, 2022)

  • v0.4.2(Mar 17, 2022)

  • v0.4.1(Mar 11, 2022)

    Improvements

    • Adjust client QPS and Burst configuration of manager
    • Mute useless logs of manager

    Fixed Issues

    • Fix "file exists" error while creating pod
  • v0.4.0(Mar 5, 2022)

    New features

    • Support BGP mode for an Underlay type Network
    • Support specifying namespace with network/subnet/network-type/ip-family
    • Introduce Felix for NetworkPolicy

    Improvements

    • Refactor daemon/manager/webhook with controller-runtime
    • Deny the creation of /32 or /128 Subnets in webhook
    • Only the IPv4 feature is valid if the DualStack feature gate is false
    • Allow specifying a subnet without a specified Network
    • The Gateway field becomes optional for VXLAN/BGP Subnets

    Fixed Issues

    • Fix an error when specifying subnets for DualStack pods
    • Fix a failure to update nodes' vxlan fdb entries for a new node
  • v0.3.2(Nov 30, 2021)

    Improvements

    • Short-circuit terminating pods before enqueuing in manager controller

    Fixed Issues

    • Fix ipv6 address range calculation error
    • Fix a nil pointer dereference error while creating a vlan interface
  • v0.3.1(Nov 16, 2021)

    Improvements

    • Detect OS parameters for disabling IPv6-related operations
    • Disallow unexpected CIDR notation in APIs

    Fixed Issues

    • Avoid permanent exit of arp proxy on large-scale clusters
  • v0.3.0(Nov 11, 2021)

    New features

    • Support the multicluster feature, which can connect the networks of two clusters (pod IPs only)

    Improvements

    • Recycle IP instances for Completed or Evicted pods
    • Use controller-gen to generate crd init yaml file

    Fixed Issues

    • Fix a masquerade error that sometimes occurs when overlay pods access underlay pods
    • Fix the high CPU cost of the hybridnet daemon in large-scale clusters
    • Fix wrong underlay pod scheduling when not all nodes belong to an underlay network while an overlay network exists
  • v0.2.1(Sep 14, 2021)

  • v0.2.0(Sep 1, 2021)

    New features

    • Change project name to "hybridnet", which is completely forward-compatible

    Improvements

    • The network type will be auto-selected when a pod has a specified network

    Fixed Issues

    • Fix wrong masquerading when a remote pod accesses a local pod (updating the daemon image and rebuilding the pod will take effect)
  • v0.1.2(Aug 5, 2021)

  • v0.1.1(Jul 30, 2021)

    Improvements

    • Add checks for pods using the same subnet as the node
    • Support setting linux kernel neigh gc thresh parameters
    • Only choose the vtep and node ip as the node's internal overlay container networking ip, with support for extra selection
    • Remove duplicated routes
    • Adapt to underlay physical environments with an arp sender ip check
    • Add prechecking of pod network configuration; if it is not ready, the pod will not be created successfully

    Fixed Issues

    • Fix a wrong data path when overlay pods access the underlay gateway and excluded ip addresses
  • v0.1.0(Jul 1, 2021)

    New features

    • Support overlay (vxlan) network
    • Support hybrid overlay/underlay container network
    • Full support for ipv4/ipv6 dual-stack

    Improvements

    • Nodes need only one physical nic if the container network is in the same vlan as the node network
    • Non-zero-netId subnets and zero-netId subnets can be on the same node
    • Webhook configuration can be managed by an independent yaml
    • Use the default-ip-retain global flag and the ip-retain pod annotation to reallocate/retain IPs

    Fixed Issues

    • Remove overlay logs for underlay-only mode
    • Fix an error when using the preferred interfaces list
    • Fix a timeout error of pod creation at large scale