
Fill-and-Spill Scheduling Tutorial

Let's look at a detailed example of using a Fill and Spill Schedule policy to place workloads across two workload clusters connected to the Nova control plane.

Setup

First, export the following environment variables so that the subsequent steps in this tutorial are easy to follow:

export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export NOVA_WORKLOAD_CLUSTER_1=wlc-1
export NOVA_WORKLOAD_CLUSTER_2=wlc-2

If you installed Nova using the tarball, export these additional environment variables:

export K8S_HOSTING_CLUSTER_CONTEXT=k8s-cluster-hosting-cp
export K8S_CLUSTER_CONTEXT_1=wlc-1
export K8S_CLUSTER_CONTEXT_2=wlc-2

Alternatively, if you installed Nova using setup scripts provided in the try-nova repository, please export these environment variables:

export K8S_HOSTING_CLUSTER_CONTEXT=kind-hosting-cluster
export K8S_CLUSTER_CONTEXT_1=kind-wlc-1
export K8S_CLUSTER_CONTEXT_2=kind-wlc-2

Let's begin by checking the workload clusters connected to Nova using kubectl:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get clusters --show-labels
NAME              K8S-VERSION   K8S-CLUSTER   REGION      ZONE         READY   IDLE   STANDBY   LABELS
wlc-1             1.29          sel-wlc-1     us-west-2   us-west-2d   True    True   False     kubernetes.io/metadata.name=kind-workload-1,nova.elotl.co/cluster.novacreated=false,nova.elotl.co/cluster.provider=aws,nova.elotl.co/cluster.region=us-west-2,nova.elotl.co/cluster.version=1.29,nova.elotl.co/cluster.zone=us-west-2d
wlc-2             1.29          sel-wlc-2     us-west-2   us-west-2d   True    True   False     kubernetes.io/metadata.name=kind-workload-2,nova.elotl.co/cluster.novacreated=false,nova.elotl.co/cluster.provider=aws,nova.elotl.co/cluster.region=us-west-2,nova.elotl.co/cluster.version=1.29,nova.elotl.co/cluster.zone=us-west-2d

Fill and Spill across specific clusters

Let's look at placing a sample workload using a fill-and-spill Schedule policy.

Step 1: Create the fill and spill schedule policy

Edit the policy manifest file (examples/sample-fill-and-spill/grp-policy.yaml), replacing the variables ${NOVA_WORKLOAD_CLUSTER_2} and ${NOVA_WORKLOAD_CLUSTER_1} with the names of the workload clusters in your setup:

orderedClusterSelector:
  matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values:
        - ${NOVA_WORKLOAD_CLUSTER_2}
        - ${NOVA_WORKLOAD_CLUSTER_1}

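Before applying it, it may help to see this selector in context. The sketch below is a hedged reconstruction of what the full policy object might look like after substitution: only the orderedClusterSelector block above comes from this tutorial, while the apiVersion and the namespaceSelector are assumptions, so treat the grp-policy.yaml in examples/sample-fill-and-spill/ as the source of truth.

# Hedged sketch only -- not the authoritative grp-policy.yaml.
# apiVersion, metadata, and namespaceSelector below are assumptions;
# only orderedClusterSelector is taken from the snippet above.
apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: fill-and-spill-policy-1
spec:
  namespaceSelector:                 # assumed: selects the namespace(s) whose workloads this policy places
    matchLabels:
      kubernetes.io/metadata.name: default
  orderedClusterSelector:            # fill the first listed cluster, then spill to the next
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values:
          - wlc-2
          - wlc-1
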
Create the policy:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f examples/sample-fill-and-spill/grp-policy.yaml
schedulepolicy.policy.elotl.co/fill-and-spill-policy-1 created

Check that the policy was created successfully:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicy
NAME                      AGE
fill-and-spill-policy-1   9s
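
Optionally, you can read the policy back from the control plane to confirm the cluster ordering Nova will use; the orderedClusterSelector you edited should list the clusters in fill-then-spill order:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicy fill-and-spill-policy-1 -o yaml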

Step 2: Create the first sample workload

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f examples/sample-fill-and-spill/nginx-group-1.yaml
deployment.apps/nginx-group-1 created
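
For reference, nginx-group-1.yaml is essentially a plain nginx Deployment carrying a CPU request. The sketch below is hypothetical (the image and the request value are assumptions; the manifest in examples/sample-fill-and-spill/ is authoritative). What matters for fill-and-spill is only that the request fits within the free capacity of the first cluster in the ordered list.

# Hypothetical sketch of nginx-group-1.yaml -- see the file in
# examples/sample-fill-and-spill/ for the real manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-group-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-group-1
  template:
    metadata:
      labels:
        app: nginx-group-1
    spec:
      containers:
        - name: nginx
          image: nginx                # assumed image
          resources:
            requests:
              cpu: "1"                # assumed value; it only needs to fit on the first cluster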

Ensure that the workload was created in the Nova control plane:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-group-1   0/1     0            0           4s

Step 3: Check workload placement

We can check that the deployment was placed, as expected, on the higher-priority workload cluster (${NOVA_WORKLOAD_CLUSTER_2}), i.e. the cluster listed first in the orderedClusterSelector of the schedule policy. We use this cluster's kube context (${K8S_CLUSTER_CONTEXT_2}) to confirm that the deployment is running successfully:

kubectl --context=${K8S_CLUSTER_CONTEXT_2} get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-group-1   1/1     1            1           12s
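
Before creating the second workload, it can be instructive to see how much allocatable CPU the higher-priority cluster has left, since that is what drives the spill decision in the next step. One standard way with plain kubectl is to look at the "Allocated resources" section of each node:

kubectl --context=${K8S_CLUSTER_CONTEXT_2} describe nodes | grep -A 7 "Allocated resources"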

Step 4: Create the second workload

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f examples/sample-fill-and-spill/nginx-group-2.yaml
deployment.apps/nginx-group-2 created

Step 5: Check workload placement

The second deployment requests 3 CPUs, and the higher-priority cluster (${NOVA_WORKLOAD_CLUSTER_2}) has only 1 CPU remaining, so this deployment gets placed on the second target cluster (${NOVA_WORKLOAD_CLUSTER_1}) in the orderedClusterSelector.
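
If you want to confirm the CPU request that triggers the spill, you can read it back from the control-plane copy of the deployment (the Nova control plane serves the standard Deployment API, as used in Step 2):

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get deploy nginx-group-2 -o jsonpath='{.spec.template.spec.containers[*].resources.requests.cpu}'

The placement itself can then be verified using the second cluster's kube context: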

kubectl --context=${K8S_CLUSTER_CONTEXT_1} get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-group-2   1/1     1            1           35s
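
As a final sanity check, the second deployment should not exist on the higher-priority cluster, so querying for it there is expected to return a NotFound error:

kubectl --context=${K8S_CLUSTER_CONTEXT_2} get deploy nginx-group-2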