
Fill-and-Spill Scheduling

Let's look at a detailed example of using a Fill and Spill Schedule policy to place workloads across two workload clusters connected to the Nova control plane.

Setup

First, export the following environment variables so that the subsequent steps in this tutorial can be followed easily.

export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export NOVA_WORKLOAD_CLUSTER_1=wlc-1
export NOVA_WORKLOAD_CLUSTER_2=wlc-2

If you installed Nova using the tarball, export this additional environment variable. You can optionally replace the value k8s-cluster-hosting-cp with the context name of your Nova hosting cluster.

export K8S_HOSTING_CLUSTER_CONTEXT=k8s-cluster-hosting-cp

Alternatively, if you installed Nova using setup scripts provided in the try-nova repository, please export these environment variables:

export K8S_HOSTING_CLUSTER_CONTEXT=kind-hosting-cluster
export K8S_CLUSTER_CONTEXT_1=kind-wlc-1
export K8S_CLUSTER_CONTEXT_2=kind-wlc-2

Environment variable names with the prefix NOVA_ refer to Nova's Cluster custom resources. Context names with the prefix K8S_ refer to the underlying Kubernetes clusters.

Let's begin by checking the workload clusters connected to Nova using kubectl:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get clusters
NAME    K8S-VERSION   K8S-CLUSTER   REGION        ZONE            READY   IDLE   STANDBY
wlc-1   1.32          nova-wlc-1    us-central1   us-central1-f   True    True   False
wlc-2   1.32          nova-wlc-2    us-central1   us-central1-c   True    True   False
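
Fill-and-spill placement is driven by the free capacity Nova sees in each workload cluster. If you want to inspect what the control plane reports for a cluster before placing workloads, you can describe its Cluster object (which fields are shown, and whether capacity appears, depends on your Nova version):

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} describe cluster ${NOVA_WORKLOAD_CLUSTER_1}
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} describe cluster ${NOVA_WORKLOAD_CLUSTER_2}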

Fill and Spill across specific clusters

Let's look at placing a sample workload using a fill-and-spill Schedule policy.

Step 1: Create the fill and spill schedule policy

The policy manifest references the variables ${NOVA_WORKLOAD_CLUSTER_2} and ${NOVA_WORKLOAD_CLUSTER_1}; these must resolve to the names of the workload clusters in your setup. The envsubst command below substitutes them from the environment variables exported earlier.

orderedClusterSelector:
  matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values:
        - ${NOVA_WORKLOAD_CLUSTER_2}
        - ${NOVA_WORKLOAD_CLUSTER_1}
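
For orientation, a complete grp-policy.yaml might look roughly like the sketch below. Only the policy name and the orderedClusterSelector block above are taken from this tutorial; the apiVersion and the selector used to match workloads to the policy are assumptions and may differ from the actual file in the try-nova repository.

# Hedged sketch of a fill-and-spill schedule policy, not a copy of the repository's manifest.
apiVersion: policy.elotl.co/v1alpha1   # assumed API version
kind: SchedulePolicy
metadata:
  name: fill-and-spill-policy-1
spec:
  namespaceSelector:                   # assumed: how workloads are matched to this policy may differ
    matchLabels:
      kubernetes.io/metadata.name: default
  orderedClusterSelector:              # fill ${NOVA_WORKLOAD_CLUSTER_2} first, then spill to ${NOVA_WORKLOAD_CLUSTER_1}
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values:
          - ${NOVA_WORKLOAD_CLUSTER_2}
          - ${NOVA_WORKLOAD_CLUSTER_1}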

Create the policy:

envsubst < "examples/sample-fill-and-spill/grp-policy.yaml" > "./grp-policy.yaml"
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f ./grp-policy.yaml
schedulepolicy.policy.elotl.co/fill-and-spill-policy-1 created

Check that the policy was created successfully:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicy
NAME                      AGE
fill-and-spill-policy-1   9s

Step 2: Create the first sample workload

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f examples/sample-fill-and-spill/nginx-group-1.yaml
deployment.apps/nginx-group-1 created
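
For context, such a workload is just a regular Deployment with a CPU request large enough to nearly fill the higher-priority cluster. The sketch below is illustrative only: the image, labels, replica count, and request value are assumptions and may differ from the actual examples/sample-fill-and-spill/nginx-group-1.yaml.

# Illustrative sketch of an nginx-group-1-style Deployment; all values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-group-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-group-1
  template:
    metadata:
      labels:
        app: nginx-group-1
    spec:
      containers:
        - name: nginx
          image: nginx:1.25        # assumed image tag
          resources:
            requests:
              cpu: "3"             # assumed value; sized to leave only ~1 CPU free on the cluster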

Ensure that the workload was created in the Nova control plane:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-group-1   0/1     0            0           4s

Step 3: Check workload placement

We can check that the deployment was placed, as expected, on the higher-priority workload cluster (${K8S_CLUSTER_CONTEXT_2}), the first entry in the orderedClusterSelector of the schedule policy. We use the kube context of this workload cluster to verify that the deployment is running successfully:

kubectl --context=${K8S_CLUSTER_CONTEXT_2} get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-group-1   1/1     1            1           12s
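
At this point nothing should have spilled over: the same query against the lower-priority cluster is expected to return no nginx-group-1 deployment. A standard (non-Nova-specific) node capacity check also shows how much allocatable CPU remains on the higher-priority cluster after this placement:

kubectl --context=${K8S_CLUSTER_CONTEXT_1} get deploy
kubectl --context=${K8S_CLUSTER_CONTEXT_2} describe nodes | grep -A 8 "Allocated resources"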

Step 4: Create the second workload

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f examples/sample-fill-and-spill/nginx-group-2.yaml
deployment.apps/nginx-group-2 created
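
As Step 5 explains, the spill is triggered by this deployment's request of 3 CPUs. The fragment below sketches the relevant container resources section; apart from the 3-CPU request, the values are illustrative assumptions and may differ from the actual examples/sample-fill-and-spill/nginx-group-2.yaml.

      containers:
        - name: nginx              # assumed container name
          image: nginx:1.25        # assumed image tag
          resources:
            requests:
              cpu: "3"             # the 3-CPU request that no longer fits on the higher-priority cluster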

Step 5: Check workload placement

The second deployment requests 3 CPUs, while the higher-priority cluster (${K8S_CLUSTER_CONTEXT_2}) has only 1 CPU remaining. As a result, this deployment is placed on the second target cluster (${K8S_CLUSTER_CONTEXT_1}) in the orderedClusterSelector.

kubectl --context=${K8S_CLUSTER_CONTEXT_1} get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-group-2   1/1     1            1           35s
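
Finally, you can list both deployments from the Nova control plane; both should eventually report ready there, even though they run in two different workload clusters:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get deploy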