### What happened?
I created a bucket by applying the following manifest:
```yaml
apiVersion: influxdb.crossplane.io/v1alpha1
kind: Bucket
metadata:
  name: example-bucket2
spec:
  forProvider:
    description: example-bucket2
    orgID: SOME_ORG_ID
    retentionRules:
      - everySeconds: 0
        type: expire
  providerConfigRef:
    name: default
```
The bucket was created and was visible in the InfluxDB Cloud 2.0 UI, but the resource never reached the `Ready: True` state:

```console
$ kubectl get buckets.influxdb.crossplane.io
NAME              READY   SYNCED   EXTERNAL-NAME     AGE
example-bucket2           True     example-bucket2   5m15s
```
I then tried to delete the bucket:

```console
$ kubectl delete buckets.influxdb.crossplane.io example-bucket2
bucket.influxdb.crossplane.io "example-bucket2" deleted
```
After that, the crossplane-provider-influxdb pod went into CrashLoopBackOff:

```console
NAME                                                         READY   STATUS             RESTARTS   AGE
pod/crossplane-78bf69f4cc-f2zmz                              1/1     Running            0          27h
pod/crossplane-provider-azure-ddc1abaec4d0-665f9bb6c8-mplm8  1/1     Running            1          27h
pod/crossplane-provider-influxdb-ba2ab2715129-8567c78666-pz57p  0/1  CrashLoopBackOff   50         3h59m
pod/crossplane-provider-jet-azure-de7fdc8d3f04-84d4874476-dstzb 1/1  Running            0          27h
pod/crossplane-provider-sql-f6f5be714a28-545b559dcb-m8kw4    1/1     Running            0          27h
pod/crossplane-provider-tf-azure-529a2cefbfc7-5ffdf4f86b-8pdcl  1/1  Running            0          27h
pod/crossplane-rbac-manager-86996b55b4-dn57k                 1/1     Running            1          27h
pod/upbound-bootstrapper-65cbb55d7c-f8z54                    1/1     Running            0          27h
pod/xgql-9bbf5d9d7-4f75j                                     1/1     Running            0          27h
```
The crashing pod's log shows the following panic:
```
I0112 14:01:23.925955 1 request.go:668] Waited for 1.011662735s due to client-side throttling, not priority and fairness, request: GET:https://192.168.0.1:443/apis/postgresql.azure.tf.crossplane.io/v1alpha1?timeout=32s
panic: runtime error: index out of range [0] with length 0
goroutine 444 [running]:
github.com/crossplane-contrib/provider-influxdb/internal/controller/bucket.LateInitialize(0xc0001b2140, 0xc0000101e0, 0x1)
/home/runner/work/provider-influxdb/provider-influxdb/internal/controller/bucket/conversions.go:129 +0x35f
github.com/crossplane-contrib/provider-influxdb/internal/controller/bucket.(*external).Observe(0xc0008440b0, 0x1c4cdc0, 0xc000319920, 0x1c6f690, 0xc0001b2000, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/home/runner/work/provider-influxdb/provider-influxdb/internal/controller/bucket/controller.go:98 +0x476
github.com/crossplane/crossplane-runtime/pkg/reconciler/managed.(*Reconciler).Reconcile(0xc0006c44d0, 0x1c4cdf8, 0xc00092e360, 0x0, 0x0, 0xc0009826e0, 0xf, 0xc00092e300, 0x0, 0x0, ...)
/home/runner/work/provider-influxdb/provider-influxdb/vendor/github.com/crossplane/crossplane-runtime/pkg/reconciler/managed/reconciler.go:679 +0x22ab
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0003234a0, 0x1c4cd50, 0xc000429540, 0x1879340, 0xc00091b340)
/home/runner/work/provider-influxdb/provider-influxdb/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:298 +0x30d
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0003234a0, 0x1c4cd50, 0xc000429540, 0x663a226465636100)
/home/runner/work/provider-influxdb/provider-influxdb/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2(0xc0003b2010, 0xc0003234a0, 0x1c4cd50, 0xc000429540)
/home/runner/work/provider-influxdb/provider-influxdb/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:214 +0x6b
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/home/runner/work/provider-influxdb/provider-influxdb/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:210 +0x425
```
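The `index out of range [0] with length 0` panic in `bucket.LateInitialize` suggests the code indexes element `[0]` of a slice (likely the retention rules returned by the InfluxDB API, which may be empty when `everySeconds: 0` denotes infinite retention) without a length check. The sketch below is a hypothetical illustration of that failure mode and the kind of guard that avoids it; the type and function names are assumptions, not the provider's actual code:

```go
package main

import "fmt"

// RetentionRule mirrors the shape of a bucket retention rule;
// the field names are illustrative, not the provider's real types.
type RetentionRule struct {
	EverySeconds int64
	Type         string
}

// lateInitializeSafe guards the [0] access that appears to cause the
// panic: when the API reports no retention rules, it returns a zero
// value and false instead of indexing into an empty slice.
func lateInitializeSafe(apiRules []RetentionRule) (RetentionRule, bool) {
	if len(apiRules) == 0 {
		return RetentionRule{}, false
	}
	return apiRules[0], true
}

func main() {
	// An empty slice, as the API might return after the external
	// bucket is deleted or when no retention rule is set.
	var empty []RetentionRule
	if _, ok := lateInitializeSafe(empty); !ok {
		fmt.Println("no retention rules returned; skipping late-initialization")
	}
}
```

With a guard like this, `Observe` could skip late-initialization of the retention rules instead of crashing the whole provider pod on every reconcile.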
### How can we reproduce it?
The steps are described in the "What happened?" section above.
### What environment did it happen in?
- Crossplane version: upbound/crossplane:v1.5.1-up.1
- Provider InfluxDB Cloud version: v0.1.1
- Cloud provider: Azure AKS
- Kubernetes version: 1.20.13