Version: v1.1

Spread scheduling with overrides

Overview

Nova supports spreading a group of workloads across multiple clusters: the whole group is cloned and runs on each of the selected workload clusters. You can also override fields in the managed objects per cluster, using the .spec.spreadConstraints.overrides field in a SchedulePolicy. In this tutorial we will do this for a Namespace object, but it works for any kind that Nova can schedule. This is useful when you want to ensure that a given workload runs on a set of clusters, but need to tweak its configuration in each workload cluster.

In this tutorial we will deploy the istio-system namespace via Nova to two workload clusters, and we will override the value of the topology.istio.io/network label in each cluster.

We will first export these environment variables so that subsequent steps in this tutorial can be easily followed.

export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export NOVA_WORKLOAD_CLUSTER_1=wlc-1
export NOVA_WORKLOAD_CLUSTER_2=wlc-2

Export these additional environment variables if you installed Nova using the tarball.

export K8S_HOSTING_CLUSTER_CONTEXT=k8s-cluster-hosting-cp
export K8S_CLUSTER_CONTEXT_1=wlc-1
export K8S_CLUSTER_CONTEXT_2=wlc-2

Alternatively, export these environment variables if you installed Nova using the setup scripts provided in the try-nova repository.

export K8S_HOSTING_CLUSTER_CONTEXT=kind-hosting-cluster
export K8S_CLUSTER_CONTEXT_1=kind-wlc-1
export K8S_CLUSTER_CONTEXT_2=kind-wlc-2

Let's start by listing our workload clusters connected to the Nova Control Plane:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get clusters
NAME    K8S-VERSION   K8S-CLUSTER   REGION   ZONE   READY   IDLE   STANDBY
wlc-1   1.28          wlc-1                         True    True   False
wlc-2   1.28          wlc-2                         True    True   False

We want to ensure that this namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    release: istio

is present in each workload cluster with a topology.istio.io/network label, but with a different value in each cluster.

Let's take a look at the SchedulePolicy manifest we will use in this tutorial:

apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: spread-namespace-policy
spec:
  namespaceSelector: {}
  resourceSelectors:
    labelSelectors:
      - matchLabels:
          release: istio # this matches our istio-system namespace, because it has the release=istio label
  groupBy:
    labelKey: release
  clusterSelector: # this cluster selector selects two workload clusters: wlc-1 & wlc-2
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values:
          - ${NOVA_WORKLOAD_CLUSTER_1} # change it to your workload cluster name
          - ${NOVA_WORKLOAD_CLUSTER_2} # change it to your workload cluster name
  spreadConstraints:
    spreadMode: Duplicate # each cluster selected in clusterSelector will get a duplicate of the object(s) matched by this policy
    # our topologyKey is the kubernetes.io/metadata.name label, which has a unique value for each workload cluster.
    # This means that each workload cluster will get a duplicate of the matched object(s).
    topologyKey: kubernetes.io/metadata.name
    # here we specify overrides per cluster
    overrides:
      # for the workload cluster whose topologyKey matches this topologyValue (in this case kubernetes.io/metadata.name=wlc-1),
      # we will apply the following workload overrides:
      - topologyValue: ${NOVA_WORKLOAD_CLUSTER_1} # change it to your workload cluster name
        resources:
          # we will override the v1/Namespace object named istio-system
          - kind: Namespace
            apiVersion: v1
            name: istio-system
            # in the istio-system namespace, we can override multiple fields; in this case we override only one
            override:
              # here we select which field we want to override. In this case it is the "topology.istio.io/network" label.
              - fieldPath: metadata.labels['topology.istio.io/network']
                value:
                  # we will override it with the "west-network" value
                  staticValue: west-network
      # for the workload cluster whose topologyKey matches this topologyValue (in this case kubernetes.io/metadata.name=wlc-2),
      # we will apply the following workload overrides:
      - topologyValue: ${NOVA_WORKLOAD_CLUSTER_2} # change it to your workload cluster name
        resources:
          # we will override the v1/Namespace object named istio-system
          - kind: Namespace
            apiVersion: v1
            name: istio-system
            # in the istio-system namespace, we can override multiple fields; in this case we override only one
            override:
              # here we select which field we want to override. In this case it is the "topology.istio.io/network" label.
              - fieldPath: metadata.labels['topology.istio.io/network']
                value:
                  # we will override it with the "central-network" value
                  staticValue: central-network

Now, we can create the SchedulePolicy and the istio-system namespace in the Nova Control Plane:

envsubst < "examples/sample-spread-scheduling/policy-override.yaml" > "./policy-override.yaml"
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f ./policy-override.yaml
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f examples/sample-spread-scheduling/namespace-override.yaml

We expect the istio-system namespace to be present in both workload clusters:

  • in the first workload cluster (here wlc-1, which is also the name of its kubeconfig context), the istio-system namespace should have the label value west-network.
  • in the second workload cluster (here wlc-2, which is also the name of its kubeconfig context), the istio-system namespace should have the label value central-network.
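Putting the policy and the overrides together, the merged object delivered to the first cluster should look roughly like this (a sketch of the expected result; status and server-managed metadata are omitted):

```yaml
# Expected istio-system Namespace as seen in wlc-1:
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    release: istio                               # from the original manifest
    topology.istio.io/network: west-network      # injected by the per-cluster override
# In wlc-2 the same object carries topology.istio.io/network: central-network instead.
```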

If we have kube contexts available for these two workload clusters, we can verify this with kubectl.

kubectl --context=${K8S_CLUSTER_CONTEXT_1} wait '--for=jsonpath={.metadata.labels.topology\.istio\.io/network}'=west-network  namespace istio-system --timeout=90s
kubectl --context=${K8S_CLUSTER_CONTEXT_2} wait '--for=jsonpath={.metadata.labels.topology\.istio\.io/network}'=central-network  namespace istio-system --timeout=90s

Overrides were applied correctly!

note

The number of replicas for Deployments, ReplicaSets, and StatefulSets, and the parallelism for Jobs, cannot be changed using overrides. Please use .spec.spreadConstraints.percentageSplit instead.
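As a rough sketch of what a percentage-based split looks like (the field names below are an assumption based on the override examples above; consult the SchedulePolicy reference for the exact schema in your Nova version):

```yaml
spreadConstraints:
  # Divide replicas across clusters instead of duplicating whole objects.
  spreadMode: Divide
  topologyKey: kubernetes.io/metadata.name
  percentageSplit:
    - topologyValue: wlc-1   # hypothetical cluster name; receives 60% of replicas
      percentage: 60
    - topologyValue: wlc-2   # receives the remaining 40%
      percentage: 40
```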

Cleanup

To delete all resources created in this tutorial, delete the namespace in the Nova Control Plane:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/sample-spread-scheduling/namespace-override.yaml

and then delete schedule policy:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f ./policy-override.yaml
rm -f ./policy-override.yaml