Spread scheduling with overrides
Overview
Nova supports spreading a group of workloads onto multiple clusters. This allows a set of Kubernetes resources to be cloned and run on multiple workload clusters. The feature can be extended to override certain fields of the Kubernetes manifests per cluster, using the .spec.spreadConstraints.overrides field in SchedulePolicy. In this tutorial, we describe using this feature for an example resource - a Namespace object. Note that the feature can be used for any Kubernetes kind that can be scheduled by Nova. It is useful when we want to ensure that a given workload runs on a set (or subset) of workload clusters, but we want to tweak its configuration in each workload cluster.
We will deploy the istio-system namespace via Nova to two workload clusters, and we will override the value of the topology.istio.io/network label in each cluster.
We will first export these environment variables so that subsequent steps in this tutorial can be easily followed.
export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export NOVA_WORKLOAD_CLUSTER_1=wlc-1
export NOVA_WORKLOAD_CLUSTER_2=wlc-2
Export this additional environment variable if you installed Nova using the tarball. You can optionally replace the value k8s-cluster-hosting-cp with the context name of your Nova hosting cluster.
export K8S_HOSTING_CLUSTER_CONTEXT=k8s-cluster-hosting-cp
Alternatively, export these environment variables if you installed Nova using the setup scripts provided in the try-nova repository.
export K8S_HOSTING_CLUSTER_CONTEXT=kind-hosting-cluster
export K8S_CLUSTER_CONTEXT_1=kind-wlc-1
export K8S_CLUSTER_CONTEXT_2=kind-wlc-2
Environment variable names with the prefix NOVA_ refer to the custom resource Cluster in Nova. Cluster context names with the prefix K8S_ refer to the underlying Kubernetes clusters.
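If you are unsure which context names to use for the exports above, you can list the contexts available in your kubeconfig. This check is optional and not part of the original tutorial steps:
# List all contexts known to kubectl; pick the ones that match your Nova Control Plane, hosting cluster, and workload clusters.
kubectl config get-contexts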
First, we list the workload clusters connected to the Nova Control Plane:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get clusters
NAME    K8S-VERSION   K8S-CLUSTER   REGION        ZONE            READY   IDLE    STANDBY
wlc-1   1.32          nova-wlc-1    us-central1   us-central1-f   True    True    False
wlc-2   1.32          nova-wlc-2    us-central1   us-central1-c   True    True    False
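If you want more detail about a connected cluster than the list view shows, you can inspect the Cluster custom resource directly (optional):
# Print the full Cluster object for the first workload cluster, including the status it reports to the Nova Control Plane.
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get cluster ${NOVA_WORKLOAD_CLUSTER_1} -o yaml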
We want to ensure that this namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    release: istio
will be present in each workload cluster with a topology.istio.io/network label, but with a different value in each cluster.
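For reference, this is what we expect the namespace to look like in the first workload cluster once the override described below has been applied. It is a sketch of the expected result, not a manifest you need to apply:
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    release: istio
    # added by the SchedulePolicy override for wlc-1; wlc-2 gets central-network instead
    topology.istio.io/network: west-network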
Let's take a look at the SchedulePolicy manifest we will use in this tutorial:
apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: spread-namespace-policy
spec:
  namespaceSelector: {}
  resourceSelectors:
    labelSelectors:
      - matchLabels:
          release: istio # this makes sure we match our istio-system namespace, because it has the release=istio label
  groupBy:
    labelKey: release
  clusterSelector: # this cluster selector selects two workload clusters: wlc-1 & wlc-2
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values:
          - ${NOVA_WORKLOAD_CLUSTER_1} # change it to your workload cluster name
          - ${NOVA_WORKLOAD_CLUSTER_2} # change it to your workload cluster name
  spreadConstraints:
    spreadMode: Duplicate # this means that each cluster selected in clusterSelector will get a duplicate of the object(s) matched by this policy
    # our topologyKey is the kubernetes.io/metadata.name label, which has a unique value for each workload cluster.
    # This means that each workload cluster will get a duplicate of the matched object(s).
    topologyKey: kubernetes.io/metadata.name
    # here we specify overrides per cluster
    overrides:
      # for a workload cluster which has topologyKey=topologyValue (in this case kubernetes.io/metadata.name=wlc-1),
      # we will apply the following overrides:
      - topologyValue: ${NOVA_WORKLOAD_CLUSTER_1} # change it to your workload cluster name
        resources:
          # we will override the v1/Namespace object named istio-system
          - kind: Namespace
            apiVersion: v1
            name: istio-system
            # in the istio-system namespace we can override multiple fields; in this case we override only one
            override:
              # here we select which field we want to override; in this case it is the "topology.istio.io/network" label
              - fieldPath: metadata.labels['topology.istio.io/network']
                value:
                  # we will override it with the "west-network" value
                  staticValue: west-network
      # for a workload cluster which has topologyKey=topologyValue (in this case kubernetes.io/metadata.name=wlc-2),
      # we will apply the following overrides:
      - topologyValue: ${NOVA_WORKLOAD_CLUSTER_2} # change it to your workload cluster name
        resources:
          # we will override the v1/Namespace object named istio-system
          - kind: Namespace
            apiVersion: v1
            name: istio-system
            # in the istio-system namespace we can override multiple fields; in this case we override only one
            override:
              # here we select which field we want to override; in this case it is the "topology.istio.io/network" label
              - fieldPath: metadata.labels['topology.istio.io/network']
                value:
                  # we will override it with the "central-network" value
                  staticValue: central-network
Now we create the SchedulePolicy and the istio-system namespace in the Nova Control Plane:
envsubst < "examples/sample-spread-scheduling/policy-override.yaml" > "./policy-override.yaml"
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f ./policy-override.yaml
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f examples/sample-spread-scheduling/namespace-override.yaml
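Optionally, you can confirm in the Nova Control Plane that both objects were created before checking the workload clusters. The schedulepolicies resource name below is an assumption based on the SchedulePolicy kind; adjust it if your installation registers the CRD under a different name:
# Check that the SchedulePolicy and the namespace exist in the Nova Control Plane.
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicies
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get namespace istio-system --show-labels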
We expect to see the istio-system namespace in both workload clusters:
- in the first workload cluster, wlc-1, the istio-system namespace should have the west-network label value.
- in the second workload cluster, wlc-2, the istio-system namespace should have the central-network label value.
In the commands below, we check these values with kubectl.
kubectl --context=${K8S_CLUSTER_CONTEXT_1} wait '--for=jsonpath={.metadata.labels.topology\.istio\.io/network}'=west-network namespace istio-system --timeout=90s
kubectl --context=${K8S_CLUSTER_CONTEXT_2} wait '--for=jsonpath={.metadata.labels.topology\.istio\.io/network}'=central-network namespace istio-system --timeout=90s
If both commands succeed, the overrides have been applied correctly.
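You can also print the labels directly in each workload cluster, if you prefer to inspect them rather than wait on a condition (optional):
# Show all labels on the istio-system namespace in each workload cluster.
kubectl --context=${K8S_CLUSTER_CONTEXT_1} get namespace istio-system --show-labels
kubectl --context=${K8S_CLUSTER_CONTEXT_2} get namespace istio-system --show-labels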
Note: the number of replicas for Deployments, ReplicaSets, and StatefulSets, as well as the parallelism for Jobs, cannot be overridden using overrides. Please use .spec.spreadConstraints.percentageSplit instead.
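As a rough illustration of that alternative, a percentageSplit stanza divides replicas between the selected clusters instead of duplicating objects. The field names inside the list below (topologyValue, percentage) and the 70/30 split are assumptions for illustration only; consult the Nova spread scheduling documentation for the exact schema:
spreadConstraints:
  topologyKey: kubernetes.io/metadata.name
  # hypothetical split: field names and values are assumptions, not taken from this tutorial
  percentageSplit:
    - topologyValue: ${NOVA_WORKLOAD_CLUSTER_1}
      percentage: 70
    - topologyValue: ${NOVA_WORKLOAD_CLUSTER_2}
      percentage: 30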
Cleanup
To delete all resources created in this tutorial, delete the namespace in the Nova Control Plane:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/sample-spread-scheduling/namespace-override.yaml
and then delete the SchedulePolicy:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f ./policy-override.yaml
rm -f ./policy-override.yaml
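After the deletion has propagated from the Nova Control Plane, the namespace should also be gone from both workload clusters. You can verify this with the commands below; both are expected to return a NotFound error once propagation completes, which may take a short while:
# Verify the namespace has been removed from each workload cluster.
kubectl --context=${K8S_CLUSTER_CONTEXT_1} get namespace istio-system
kubectl --context=${K8S_CLUSTER_CONTEXT_2} get namespace istio-system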