Version: v0.7.1

Spread scheduling with overrides

Overview

Nova supports spreading a group of workloads onto multiple clusters: the whole group can be cloned and run in multiple workload clusters. You also have the option to override fields in the managed objects per cluster, using the .spec.spreadConstraints.overrides field in a SchedulePolicy. In this tutorial we will learn how to do this for an object of kind Namespace, but it can be done for any kind which can be scheduled by Nova. This is useful when we want to ensure that a given workload runs in a set of clusters, but want to tweak its configuration in each workload cluster.

In this tutorial we will deploy the istio-system namespace via Nova to two workload clusters, and we will override the value of the topology.istio.io/network label in each cluster.

Let's start by listing our workload clusters connected to the Nova Control Plane:

kubectl --context=nova get clusters
NAME              K8S-VERSION   K8S-CLUSTER   REGION   ZONE   READY   IDLE    STANDBY
kind-workload-1   1.25          workload-1                    True    False   False
kind-workload-2   1.25          workload-2                    True    True    False

If your workload clusters are named differently, open examples/sample-spread-scheduling/policy-override.yaml and change every occurrence of kind-workload-1 & kind-workload-2 in that file to your workload cluster names.
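Instead of editing the file by hand, you can script the rename with sed. A minimal sketch: the heredoc below stands in for the real policy file (run the same sed command against examples/sample-spread-scheduling/policy-override.yaml in your checkout), and my-cluster-1/my-cluster-2 are hypothetical cluster names.

```shell
# Stand-in for the real policy-override.yaml, so the sketch is self-contained.
cat > policy-override.yaml <<'EOF'
values:
- kind-workload-1
- kind-workload-2
EOF

# Replace the sample cluster names with your own
# ("my-cluster-1"/"my-cluster-2" are hypothetical placeholders).
# -i.bak edits in place and keeps a .bak backup; it works with GNU and BSD sed.
sed -i.bak \
  -e 's/kind-workload-1/my-cluster-1/g' \
  -e 's/kind-workload-2/my-cluster-2/g' \
  policy-override.yaml

cat policy-override.yaml
```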

We want to ensure that this namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    release: istio

will be present in each workload cluster with a topology.istio.io/network label, but with a different value in each cluster.

Let's take a look at the SchedulePolicy manifest we will use in this tutorial:

apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: spread-namespace-policy
spec:
  namespaceSelector: {}
  resourceSelectors:
    labelSelectors:
    - matchLabels:
        release: istio # this makes sure we match our istio-system namespace, because it has the release=istio label
  groupBy:
    labelKey: release
  clusterSelector: # this cluster selector selects two workload clusters: kind-workload-1 & kind-workload-2
    matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values:
      - kind-workload-1 # change it to your workload cluster name
      - kind-workload-2 # change it to your workload cluster name
  spreadConstraints:
    spreadMode: Duplicate # each cluster selected in clusterSelector will get a duplicate of the object(s) matched by this policy
    # our topologyKey is the kubernetes.io/metadata.name label, which has a unique value for each workload cluster.
    # This means that each workload cluster will get a duplicate of the matched object(s).
    topologyKey: kubernetes.io/metadata.name
    # here we specify overrides per cluster
    overrides:
    # for a workload cluster which has topologyKey=topologyValue (in this case kubernetes.io/metadata.name=kind-workload-1),
    # we will apply the following workload overrides:
    - topologyValue: kind-workload-1 # change it to your workload cluster name
      resources:
      # we will override the v1/Namespace object named istio-system
      - kind: Namespace
        apiVersion: v1
        name: istio-system
        # in the istio-system namespace, we can override multiple fields; in this case we will override only one
        override:
        # here we select which field we want to override: the "topology.istio.io/network" label.
        - fieldPath: metadata.labels['topology.istio.io/network']
          value:
            # we will override it with the "west-network" value
            staticValue: west-network
    # for a workload cluster which has topologyKey=topologyValue (in this case kubernetes.io/metadata.name=kind-workload-2),
    # we will apply the following workload overrides:
    - topologyValue: kind-workload-2 # change it to your workload cluster name
      resources:
      # we will override the v1/Namespace object named istio-system
      - kind: Namespace
        apiVersion: v1
        name: istio-system
        # in the istio-system namespace, we can override multiple fields; in this case we will override only one
        override:
        # here we select which field we want to override: the "topology.istio.io/network" label.
        - fieldPath: metadata.labels['topology.istio.io/network']
          value:
            # we will override it with the "central-network" value
            staticValue: central-network

Now we can create the SchedulePolicy and the istio-system namespace in the Nova Control Plane:

kubectl --context=nova create -f examples/sample-spread-scheduling/policy-override.yaml
kubectl --context=nova create -f examples/sample-spread-scheduling/namespace-override.yaml

We expect the istio-system namespace to be present in both workload clusters:

  • in the first workload cluster (kind-workload-1 in my case; this is also the name of its kubeconfig context), the istio-system namespace should have the topology.istio.io/network label set to west-network.
  • in the second workload cluster (kind-workload-2 in my case; this is also the name of its kubeconfig context), the istio-system namespace should have the topology.istio.io/network label set to central-network.

If we have kubeconfig contexts available for these two workload clusters, we can check this with kubectl.

kubectl --context=kind-workload-1 wait '--for=jsonpath={.metadata.labels.topology\.istio\.io/network}'=west-network  namespace istio-system --timeout=90s
kubectl --context=kind-workload-2 wait '--for=jsonpath={.metadata.labels.topology\.istio\.io/network}'=central-network  namespace istio-system --timeout=90s
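If both wait commands succeed, the synced namespace in each cluster carries the expected label. In the first cluster it should look roughly like the sketch below (Nova may add further labels or annotations of its own):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    release: istio
    topology.istio.io/network: west-network # "central-network" in kind-workload-2
```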

Overrides were applied correctly!

note

The number of replicas for Deployments, ReplicaSets, and StatefulSets, and the parallelism for Jobs, cannot be overridden using overrides. Please use .spec.spreadConstraints.percentageSplit instead.
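For replica counts, the spreadConstraints section of the policy would instead split replicas by percentage across the selected clusters. A hedged sketch along the lines of the SchedulePolicy schema above (the exact field layout, including the spreadMode value, should be checked against the Nova reference for your version):

```yaml
spreadConstraints:
  topologyKey: kubernetes.io/metadata.name
  percentageSplit:
  - topologyValue: kind-workload-1 # this cluster runs 70% of the replicas
    percentage: 70
  - topologyValue: kind-workload-2 # this cluster runs the remaining 30%
    percentage: 30
```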

Cleanup

To delete all resources created in this tutorial, first delete the namespace in the Nova Control Plane:

kubectl --context=nova delete -f examples/sample-spread-scheduling/namespace-override.yaml

and then delete the SchedulePolicy:

kubectl --context=nova delete -f examples/sample-spread-scheduling/policy-override.yaml