Overview

El Carro: The Oracle Operator for Kubernetes

Run Oracle on Kubernetes with El Carro

El Carro is a new project that offers a way to run Oracle databases in Kubernetes as a portable, open source, community driven, no vendor lock-in container orchestration system. El Carro provides a powerful declarative API for comprehensive and consistent configuration and deployment as well as for real-time operations and monitoring.

High Level Overview

El Carro helps you with the deployment and management of Oracle database software in Kubernetes. You must have appropriate licensing rights to that Oracle software in order to use it with El Carro (bring your own license, or BYOL).

With the current release, you download the El Carro installation bundle, stage the Oracle installation software, create a containerized database image (with or without a seed database), and then create an Instance (known as a CDB in Oracle parlance) and add one or more Databases (known as PDBs).

After the El Carro Instance and Database(s) are created, you can take snapshot-based or RMAN-based backups and get basic monitoring and logging information. Additional database services will be added in future releases.

License Notice

You can use El Carro to automatically provision and manage Oracle Database Express Edition (XE) or Oracle Database Enterprise Edition (EE). In each case, it is your responsibility to ensure that you have appropriate licenses to use any such Oracle software with El Carro.

Please also note that each El Carro “database” will create a pluggable database, which may require licensing of the Oracle Multitenant option.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Quickstart

We recommend starting with the quickstart first, but as you become more familiar with El Carro, consider trying more advanced features by following the user guides linked below.

If you have a valid license for Oracle 12c EE and would like to get your Oracle database up and running on Kubernetes, you can follow this quickstart guide.

As an alternative to Oracle 12c EE, you can use Oracle 18c XE, which is free to use, by following the quickstart guide for Oracle 18c XE instead.

If you prefer to run El Carro locally on your personal computer, you can follow the user guide for Oracle on minikube.

Preparation

To prepare the El Carro download and deployment, follow this guide.

Provisioning

El Carro helps you easily create, scale, and delete Oracle databases.

First, you need to create a containerized database image.

You can optionally create a default Config to set namespace-wide defaults for configuring your databases, following this guide.
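
As a sketch of what such a Config might look like: the snippet below writes a hypothetical manifest and checks it locally before applying. The `spec` fields shown (such as `platform`) are illustrative assumptions; the linked guide is authoritative for the actual schema.

```shell
# A hypothetical namespace-wide Config; the spec fields are assumptions.
cat > config.yaml <<'EOF'
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Config
metadata:
  name: config
spec:
  platform: GCP   # assumption: the platform these defaults target
EOF

# Sanity-check the manifest before applying it to the cluster:
grep -q 'kind: Config' config.yaml && echo "config.yaml written"

# kubectl apply -f config.yaml   # requires a cluster with El Carro installed
```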

Then you can create Instances (known as CDBs in Oracle parlance), following this guide. Afterward, create Databases (known as PDBs) and users following this guide.
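
To make the Instance/Database split concrete, here is a hedged sketch of the two manifests. The `cdbName` constraint matches the CRD validation discussed later on this page; the other `spec` fields are assumptions, so follow the guides above for the exact schema.

```shell
# Hypothetical Instance (CDB) and Database (PDB) manifests; field names
# other than cdbName are assumptions.
cat > instance.yaml <<'EOF'
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Instance
metadata:
  name: mydb
spec:
  cdbName: MYDB   # must match ^[A-Z][A-Z0-9]*$ and be at most 8 chars
EOF

cat > database.yaml <<'EOF'
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Database
metadata:
  name: pdb1
spec:
  instance: mydb   # assumption: reference to the parent Instance
  name: PDB1       # assumption: the PDB name
EOF

# Check the CDB name locally against the validation pattern:
grep -oE 'cdbName: [A-Z][A-Z0-9]*' instance.yaml   # prints: cdbName: MYDB

# kubectl apply -f instance.yaml -f database.yaml
```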

Backup and Recovery

El Carro provides both storage-snapshot-based backup/restore and Oracle-native RMAN-based backup/restore features to support your database backup and recovery strategy.

After the El Carro Instance and Database(s) are created, you can create storage-snapshot-based backups, following this guide.

You can also create Oracle-native RMAN-based backups, following this guide.

To restore from a backup, follow this guide.
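
As a rough sketch, both backup flavors could be requested through a single Backup resource. Everything in the `spec` below (including a `type` field selecting snapshot vs. RMAN) is an assumption for illustration; the guides above describe the real schema.

```shell
# A hypothetical Backup request; all spec fields are assumptions.
cat > backup.yaml <<'EOF'
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Backup
metadata:
  name: mydb-snap-1
spec:
  instance: mydb   # assumption: the Instance to back up
  type: Snapshot   # assumption: storage snapshot vs. RMAN
EOF

grep -q 'kind: Backup' backup.yaml && echo "backup manifest written"
# kubectl apply -f backup.yaml
```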

Data Import & Export

El Carro provides data import/export features based on Oracle Data Pump.

To import data to your El Carro database, follow this guide.

To export data from your El Carro database, follow this guide.
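
A hedged sketch of a Data Pump import request follows; every `spec` field here is an assumption (note that, per the project's issue tracker, only Google Cloud Storage paths are currently supported as a data source), so consult the import guide for the authoritative fields.

```shell
# A hypothetical Data Pump Import; all spec fields are assumptions.
cat > import.yaml <<'EOF'
apiVersion: oracle.db.anthosapis.com/v1alpha1
kind: Import
metadata:
  name: import-pdb1
spec:
  instance: mydb                      # assumption
  databaseName: PDB1                  # assumption
  gcsPath: gs://my-bucket/export.dmp  # assumption: dump file location
EOF

grep -q 'kind: Import' import.yaml && echo "import manifest written"
# kubectl apply -f import.yaml
```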

What's More?

There are more features supported by El Carro, with more to be added soon! For more information, check out logging, monitoring, connectivity, the UI, and more.

Contributing

You're very welcome to contribute to the El Carro Project!

We've put together a set of contributing and development guidelines that you can review in this guide.

Support

To report a bug or log a feature request, please open a GitHub issue and follow the guidelines for submitting a bug.

For general questions or community support, we welcome you to join the El Carro community mailing list and ask your question there.

Issues
  • Unable to build operator docker image. stat oracle/pkg/database/common: file does not exist

    Describe the bug: Unable to build the operator docker image locally

    To Reproduce

    cd $PATH_TO_EL_CARRO_REPO
    {
    export REPO="localhost:5000/oracle.db.anthosapis.com"
    export TAG="latest"
    export OPERATOR_IMG="${REPO}/operator:${TAG}"
    docker build -f oracle/Dockerfile -t ${OPERATOR_IMG} .
    docker push ${OPERATOR_IMG}
    }
    
    Sending build context to Docker daemon   4.71MB
    Step 1/19 : FROM docker.io/golang:1.15 as builder
     ---> 40349a2425ef
    Step 2/19 : WORKDIR /build
     ---> Using cache
     ---> b44c2a87f722
    Step 3/19 : COPY go.mod go.mod
     ---> Using cache
     ---> c359cdfe04b9
    Step 4/19 : COPY go.sum go.sum
     ---> Using cache
     ---> 6f6d2902ef22
    Step 5/19 : RUN go mod download
     ---> Using cache
     ---> 8be558325755
    Step 6/19 : COPY common common
     ---> Using cache
     ---> 1dd64c7bfbc5
    Step 7/19 : COPY oracle/main.go oracle/main.go
     ---> Using cache
     ---> 0a79c9d91f73
    Step 8/19 : COPY oracle/version.go oracle/version.go
     ---> Using cache
     ---> a9fbca9b14cf
    Step 9/19 : COPY oracle/api/ oracle/api/
     ---> Using cache
     ---> 123c5e7c856e
    Step 10/19 : COPY oracle/controllers/ oracle/controllers/
     ---> Using cache
     ---> 7c7a1ff96c61
    Step 11/19 : COPY oracle/pkg/agents oracle/pkg/agents
     ---> Using cache
     ---> 9d5ed5ea3f52
    Step 12/19 : COPY oracle/pkg/database/common oracle/pkg/database/common
    COPY failed: file not found in build context or excluded by .dockerignore: stat oracle/pkg/database/common: file does not exist
    

    Expected behavior: docker build finishes successfully

    opened by urbanchef 19
  • Fix bug in testhelpers k8sUpdateWithRetryHelper

    In k8sUpdateWithRetryHelper, after updating the object, we make an extra Get to make sure the object has changed. This causes problems in scenarios where the object gets deleted after the update (e.g., by another controller).

    For example, in my test I'm using this helper to update an object and remove its finalizer. After removing the finalizer the object is removed, and so the test fails on line envtest.go:1102 when it tries to fetch the object again to compare resourceVersions.

    I made a small change to handle this case.

    size/XS lgtm ok-to-test approved 
    opened by ha-D 7
  • [Backup Schedule] Move CronAnything from oracle to common.

    As described in go/anthos-postgres-backup-schedule-dd, we should have a general framework for backup schedules. Oracle adopts CronAnything to do the cron backup job. Our current design is to reuse the work in oracle, so the first step is to move CronAnything from oracle/ to common/.

    Bug: b/193256355

    lgtm cla: yes size/XXL ok-to-test approved 
    opened by shuhanfan 6
  • Add VolumeName in DiskSpec

    To enhance 1:1 binding between a statically provisioned PV and its PVC

    Bug: 196033991 Doc: go/ods-postgres-static-pv

    Change-Id: I4df7a517f10cc9c1a05c78c5ae4a1cce59d0de38

    size/S lgtm cla: yes ok-to-test approved 
    opened by angelawan-jiawan 5
  • Instructions and config to run in AWS EKS

    I have tested all the steps described in this blog post: https://blog.pythian.com/using-el-carro-operator-on-aws/ I made minor changes from that post to create these instructions, making them easier to follow and using fewer pre-created files.

    size/XL lgtm cla: yes ok-to-test approved 
    opened by ncalero-uy 5
  • Refactor restore logic into a separate state machine

    • Move everything restore-related to instance_controller_restore.go
    • Simplify code flow and readability
    • Add 2 extra statuses RestorePreparationInProgress / RestorePreparationComplete. This helps keep track of old STS/PVC removal.
    • Minor tweaks to functional tests (delete LRO might be called more than once, this is expected)
    • This should fix bug/flake '[pvc] is being deleted'
    size/XL 
    opened by kchernyshev 5
  • Fix tag on UI image

    The upstream UI image gcr.io/elcarro/oracle.db.anthosapis.com/ui does not have a latest tag:

    $ curl -s https://gcr.io/v2/elcarro/oracle.db.anthosapis.com/ui/tags/list | jq '.tags'
    [
      "v0.0.0-alpha",
      "v0.1.0-alpha"
    ]
    

    It results in an error when installing the UI as per https://github.com/GoogleCloudPlatform/elcarro-oracle-operator/blob/main/docs/content/monitoring/ui.md

    Failed to pull image "gcr.io/elcarro/oracle.db.anthosapis.com/ui:latest": rpc error: code = NotFound desc = failed to pull and unpack image "gcr.io/elcarro/oracle.db.anthosapis.com/ui:latest": failed to resolve reference "gcr.io/elcarro/oracle.db.anthosapis.com/ui:latest": gcr.io/elcarro/oracle.db.anthosapis.com/ui:latest: not found
    

    Following the convention in operator.yaml, here we change the tag to v0.1.0-alpha. With this change, the UI installs successfully:

    $ kubectl describe pod -n ui | tail
     Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
     Events:
       Type    Reason     Age   From               Message
       ----    ------     ----  ----               -------
       Normal  Scheduled  25s   default-scheduler  Successfully assigned ui/ui-659665f8cb-5wlrz to 4633612-svr004
       Normal  Pulling    24s   kubelet            Pulling image "gcr.io/elcarro/oracle.db.anthosapis.com/ui:v0.1.0-alpha"
       Normal  Pulled     23s   kubelet            Successfully pulled image "gcr.io/elcarro/oracle.db.anthosapis.com/ui:v0.1.0-alpha" in 1.341448841s
       Normal  Created    23s   kubelet            Created container ui
       Normal  Started    23s   kubelet            Started container ui
    
    size/XS lgtm approved 
    opened by mfielding 4
  • Refactor the backup controller.

    • Add a logger parameter so that it shares the same configuration as the rest of the reconciler code.
    • Clarify the purpose of the empty case block.
    • Extract the LRO backup-in-progress status checking into a separate function; this is equivalent to the backupInProgress function (renamed to snapshotBackupInProgress)

    Change-Id: Ice6081c7779cb5912d197729c5fe611491ea3e99

    refactoring size/M lgtm cla: yes ok-to-test approved 
    opened by ankel 4
  • Increase operator memory limits

    When testing on bare metal environments, I'm seeing restarts due to out-of-memory conditions. For example:

    manager invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), order=0, oom_score_adj=999
    CPU: 40 PID: 1476973 Comm: manager Kdump: loaded Tainted: G          I      --------- -  - 4.18.0-305.25.1.el8_4.x86_64 #1
    ...
    memory: usage 40960kB, limit 40960kB, failcnt 318
    Memory cgroup stats for /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe4ac3b_fed7_45d6_8ba9_b81361ddffc5.slice/cri-containerd-39a63070dceb191a7c2cd62666b29a747eacf7f6d10f8f5df05aea57470dcbd8.scope:
    anon 38109184#012file 0#012kernel_stack 184320#012slab 0#012percpu 0#012sock 0#012shmem 0#012file_mapped 0#012file_dirty 0#012file_writeback 0#012anon_thp 0#012inactive_anon 38576128#012active_anon 0#012inactive_file 0#012active_file 0#012unevictable 0#012slab_reclaimable 0#012slab_unreclaimable 0#012pgfault 11220#012pgmajfault 0#012workingset_refault_anon 0#012workingset_refault_file 0#012workingset_activate_anon 0#012workingset_activate_file 0#012workingset_restore_anon 0#012workingset_restore_file 0#012workingset_nodereclaim 0#012pgrefill 0#012pgscan 0#012pgsteal 0#012pgactivate 0#012pgdeactivate 0#012pglazyfree 0#012pglazyfreed 0#012thp_fault_alloc 0#012thp_collapse_alloc 0
    Tasks state (memory values in pages):
    [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
    [1473780] 65532 1473780   981102    16623   614400        0           999 manager
    oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=cri-containerd-39a63070dceb191a7c2cd62666b29a747eacf7f6d10f8f5df05aea57470dcbd8.scope,mems_allowed=0-1,oom_memcg=/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe4ac3b_fed7_45d6_8ba9_b81361ddffc5.slice/cri-containerd-39a63070dceb191a7c2cd62666b29a747eacf7f6d10f8f5df05aea57470dcbd8.scope,task_memcg=/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe4ac3b_fed7_45d6_8ba9_b81361ddffc5.slice/cri-containerd-39a63070dceb191a7c2cd62666b29a747eacf7f6d10f8f5df05aea57470dcbd8.scope,task=manager,pid=1473780,uid=65532
    

    After upping the limit from 40Mi to 60Mi, the errors went away for me.

    Change-Id: I63684199d6ce2a5c59cf97547c4edd56f8c8e4ff

    size/XS lgtm approved 
    opened by mfielding 3
  • Fix method of obtaining service account token

    The .secrets field within service accounts is only for listing secrets to be mounted into pods, not for extracting secrets for other uses. There is no guarantee the first item in the list will be a token secret.

    In 1.24+, this field is not populated by default.

    Adapted the instructions from https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token to obtain a token

    cc @zshihang

    size/S lgtm needs-ok-to-test approved 
    opened by liggitt 3
  • Changes to turn off (along with turning on) dNFS state.

    Summary: Until now we only allowed enabling dNFS using the enableDnfs flag. This change also allows turning the dNFS feature off (as well as on).

    Change-Id: Ibc00823760ad5cafe326bc00ee102808bdc8d1b8

    size/L approved 
    opened by aravindous 3
  • Anchor regex for CDBName

    In PR #228 we added validation for the CDBName instance parameter. However, the regex was missing line start and end anchors (^ and $), meaning that it would match partial substrings. So a CDBName like 1DB would have been accepted when it should have errored out.

    Adding the anchors here, and also removing the backquotes for consistency with the other patterns, and since none of these characters need quoting in YAML.

    I took the liberty of creating test cases with a few cdbName values:

    $ for file in *.yaml; do grep -H cdbName $file && kubectl apply -f $file; done
    instance-8char.yaml:  cdbName: A2345678
    instance.oracle.db.anthosapis.com/mydb configured
    instance-a.yaml:  cdbName: A
    instance.oracle.db.anthosapis.com/mydb configured
    instance-db1.yaml:  cdbName: DB1
    instance.oracle.db.anthosapis.com/mydb configured
    instance-lowercase.yaml:  cdbName: lowercase
    The Instance "mydb" is invalid: spec.cdbName: Invalid value:
    "lowercase": spec.cdbName in body should be at most 8 chars long
    instance-number.yaml:  cdbName: 1DB
    The Instance "mydb" is invalid: spec.cdbName: Invalid value: "1DB":
    spec.cdbName in body should match '^[A-Z][A-Z0-9]*$'
    instance-symbols.yaml:  cdbName: DB#1
    The Instance "mydb" is invalid: spec.cdbName: Invalid value: "DB#1":
    spec.cdbName in body should match '^[A-Z][A-Z0-9]*$'
    instance-toolong.yaml:  cdbName: OVER8CHARS
    The Instance "mydb" is invalid: spec.cdbName: Invalid value:
    "OVER8CHARS": spec.cdbName in body should be at most 8 chars long
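
    The effect of the missing anchors can be reproduced with plain grep: the unanchored pattern matches the DB substring of 1DB, while the anchored pattern rejects the value outright.

```shell
# Unanchored vs. anchored matching, as in the CDBName validation fix.
unanchored='[A-Z][A-Z0-9]*'
anchored='^[A-Z][A-Z0-9]*$'

echo '1DB' | grep -Eq "$unanchored" && echo "unanchored: 1DB accepted"  # substring DB matches
echo '1DB' | grep -Eq "$anchored"   || echo "anchored: 1DB rejected"
echo 'DB1' | grep -Eq "$anchored"   && echo "anchored: DB1 accepted"
```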
    
    size/XS lgtm approved 
    opened by mfielding 1
  • Confusion about platform dependency

    Describe the bug

    The true Kubernetes way is to avoid any dependencies on the underlying platform and allow the application to run the same way on any K8s cluster installation. Also, the nature of the app itself shouldn't matter, be it stateful or stateless.

    Nowadays, the cloud providers' code has been moved out of the tree into separate external cloud-provider repos. Kubernetes allows dealing with abstract things like Ingress, StorageClass, LoadBalancer services, etc. Such things hide the underlying platform from the applications and allow them to run on any Kubernetes installation, be it Minikube, GCP, or any other way to run k8s. The logic to provision such abstract things is delegated to the appropriate cloud-provider binary. This binary knows better what to do to provide a service.

    As I see it, in the v0.1.0-alpha version there is a small amount of platform-dependent code, and in most cases it is unnecessary, so it is a good time to redesign the El Carro operator to make it more appealing to the community.

    Expected behavior: A cloud-agnostic way to run the El Carro Oracle Operator.

    Additional context: Cloud Providers in Kubernetes

    opened by ITD27M01 1
  • Plugins support for Import/Export resources

    Is your feature request related to a problem? Please describe. Currently only Google Cloud Storage is supported for the import and export of Data Pump files:

    https://github.com/GoogleCloudPlatform/elcarro-oracle-operator/blob/d2aed2814023cc8b672c727b605165d8fdccfffd/oracle/api/v1alpha1/import_types.go#L41

    As a result, such features are unusable in enterprise environments with restricted networks and strict rules for data location.

    Describe the solution you'd like: I suggest changing the "generic" Import/Export resources to use something like http/https URLs and adding the ability to "load" plugins for different storage backends on different cloud providers or protocols (NFS, CIFS, or anything else)

    opened by ITD27M01 3
Releases (v0.2.0-alpha)
  • v0.2.0-alpha(Aug 11, 2022)

    This release brings two new exciting features to enhance El Carro and make it easier to manage your Oracle deployments: Improved Data Migration and Point-in-Time Recovery (PITR). Take a look at our blog post to understand how these features can add value to your organization. Additionally, this release provides numerous bug fixes. If you have any questions or comments, feel free to reach out via our Google group or by filing a GitHub issue.

    Source code(tar.gz)
    Source code(zip)
    release-artifacts.tar.gz(330.00 KB)
  • v0.1.0-alpha(Sep 3, 2021)

    This release provides the latest and greatest tools for running Oracle Databases on Kubernetes. Oracle 19c EE (Enterprise Edition) is fully supported by this release. Additionally, El Carro gives you more flexibility by supporting 4 different options for sourcing Oracle Database container images as described in our blog post. This release also brings numerous bug fixes and improvements. In addition to GKE, EKS, and Minikube, El Carro now supports Kind.

    Source code(tar.gz)
    Source code(zip)
    release-artifacts.tar.gz(41.39 KB)
  • v0.0.0-alpha(May 13, 2021)

Owner
Google Cloud Platform