Version: v0.9.0

Configuration

Nova's workload scheduling is configured with the SchedulePolicy Custom Resource. This section shows a detailed example of a SchedulePolicy with all of its fields described via inline comments.

SchedulePolicy

apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: demo-policy
spec:
  # namespaceSelector specifies the namespace(s) where the matching resources live;
  # non-namespaced objects are matched as well.
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: microsvc-demo
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: NotIn # possible operators: In, NotIn, Exists, DoesNotExist
        values:
          - namespace-2
          - namespace-3
          - namespace-4
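    # Note: as with standard Kubernetes label selectors, matchLabels and matchExpressions are
    # ANDed; with the values above only the microsvc-demo namespace matches, and the NotIn
    # expression additionally rules out namespace-2, namespace-3 and namespace-4.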
  # clusterSelector specifies the list of workload clusters (represented as Cluster Custom Resources)
  # which will be considered as hosting clusters for all resources matched by this policy.
  # If more than one cluster is selected, Nova will try to pick a workload cluster which has enough resources
  # to host the object (or the objects grouped into a ScheduleGroup).
  # If clusterSelector is not specified, Nova will consider all workload clusters.
  clusterSelector:
    matchLabels:
      nova.elotl.co/cluster.region: "us-east-1"
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: NotIn
        values:
          - kind-workload-2
          - kind-workload-3
          - kind-workload-4
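    # Note: with the selector above, a Cluster labeled nova.elotl.co/cluster.region=us-east-1
    # and named e.g. kind-workload-1 is considered, while kind-workload-2, kind-workload-3
    # and kind-workload-4 are excluded by the NotIn expression even if they carry the region label.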
  # groupBy.labelKey specifies how the objects should be grouped.
  # If labelKey is empty (default), Nova won't group objects into ScheduleGroups and will
  # try to find a workload cluster for each object separately.
  # If you specify groupBy.labelKey, Nova will create a ScheduleGroup for each value of this label.
  # This is convenient if you want to schedule multiple objects together (to the same workload cluster).
  groupBy:
    labelKey: color
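  # For example (hypothetical workloads): two Deployments labeled color: blue end up in the same
  # ScheduleGroup and are scheduled to the same workload cluster, while a Deployment labeled
  # color: red forms a separate ScheduleGroup that may land on a different cluster.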

  # spreadConstraints enables spreading a group of objects (.spec.groupBy has to be set) onto multiple clusters.
  # spreadConstraints.topologyKey refers to the Cluster CR label which should be used to group
  # clusters into topology domains. E.g. for topologyKey: nova.elotl.co/cluster.version,
  # clusters with nova.elotl.co/cluster.version=v1.22 will be treated as one topology domain.
  # For kubernetes.io/metadata.name, each cluster will be treated as its own topology domain.
  # percentageSplit defines spread constraints over the topology domains.
  # The example below says:
  # for all k8s resources with replicas matched by this policy (e.g. Deployment, ReplicaSet),
  # take the list of workload clusters matching this policy (see .clusterSelector);
  # then, for the cluster labeled kubernetes.io/metadata.name=kind-workload-1, create a copy of the k8s resources from this group
  # and set the pod controllers' (Deployment, ReplicaSet, etc.) replicas to 20% of the original replica count;
  # for the cluster labeled kubernetes.io/metadata.name=kind-workload-2, create a copy of the k8s resources from this group
  # and set the pod controllers' (Deployment, ReplicaSet, etc.) replicas to 80% of the original replica count.
  spreadConstraints:
    # Available spreadModes are Divide and Duplicate.
    # Divide takes the number of replicas for Deployments, ReplicaSets, StatefulSets, etc. (or parallelism for Jobs)
    # and divides it between the chosen clusters. In Divide mode it is guaranteed that exactly the number
    # of replicas specified in the manifests will run.
    # In Duplicate mode, each workload cluster will run the originally specified replica count.
    # This means that if your Deployment has .spec.replicas set to 2 and the policy matches 3 workload clusters,
    # each workload cluster will run 2 replicas, so you will end up running 6 replicas in total.
    spreadMode: Divide
    topologyKey: kubernetes.io/metadata.name
    # percentageSplit is ignored for spreadMode: Duplicate. The sum of .percentageSplit.percentage values has to equal 100.
    percentageSplit:
      - topologyValue: kind-workload-1
        percentage: 20
      - topologyValue: kind-workload-2
        percentage: 80
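    # For example (a hypothetical workload): with spreadMode: Divide, a matched Deployment with
    # .spec.replicas: 10 runs 2 replicas (20%) in kind-workload-1 and 8 replicas (80%) in
    # kind-workload-2, 10 in total. With spreadMode: Duplicate, the same Deployment would run
    # 10 replicas in each of the two clusters, 20 in total.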
  # You can use overrides to customize particular objects managed by this policy, per cluster.
  # This is useful when, for example, you want almost exactly the same namespace in each cluster but each with a different label value,
  # or you need to spread a StatefulSet / Deployment across clusters and set a unique identifier in each cluster, e.g. as a command line argument.
  # Below is an example of how overrides can be used to create the same namespace in all clusters, each carrying a different value of the Istio network label.
  # The original object needs to have this label key set, with a placeholder value.
  overrides:
    - topologyValue: kind-workload-1
      resources:
        - kind: Namespace
          apiVersion: v1
          name: nginx-spread-3
          override:
            # Field paths reference a field within a Kubernetes object via a simple string.
            # API conventions describe the syntax as "standard JavaScript syntax for
            # accessing that field, assuming the JSON object was transformed into a
            # JavaScript object, without the leading dot, such as metadata.name".
            # Valid examples:
            # * metadata.name
            # * spec.containers[0].name
            # * data[.config.yml]
            # * metadata.annotations['crossplane.io/external-name']
            # * spec.items[0][8]
            # * apiVersion
            # * [42]
            # Invalid examples:
            # * .metadata.name - Leading period.
            # * metadata..name - Double period.
            # * metadata.name. - Trailing period.
            # * spec.containers[] - Empty brackets.
            # * spec.containers.[0].name - Period before open bracket.
            - fieldPath: metadata.labels['topology.istio.io/network']
              value:
                staticValue: network-1
    - topologyValue: kind-workload-2
      resources:
        - kind: Namespace
          apiVersion: v1
          name: nginx-spread-3
          override:
            - fieldPath: metadata.labels['topology.istio.io/network']
              value:
                staticValue: network-2

  # resourceSelectors specify which resources match this policy.
  # The example below matches objects that have the label microServicesDemo: "yes"
  # and also carry the label key app.kubernetes.io (with any value).
  resourceSelectors:
    labelSelectors:
      - matchLabels:
          microServicesDemo: "yes"
        matchExpressions:
          - key: app.kubernetes.io
            operator: Exists
            values: []
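
The overrides above assume that the source object already has the topology.istio.io/network label set to a placeholder value. A source Namespace for this policy might look like the sketch below (the name comes from the overrides above; the other labels and values are illustrative assumptions needed for the policy's selectors to match):

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-spread-3
  labels:
    microServicesDemo: "yes"                # required by the resourceSelectors above
    app.kubernetes.io: microsvc-demo        # the key must exist; the value does not matter
    color: blue                             # groups this object via groupBy.labelKey
    topology.istio.io/network: placeholder  # rewritten per cluster by the overrides above

In the copy delivered to kind-workload-1 the placeholder becomes network-1, and in the copy delivered to kind-workload-2 it becomes network-2.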

This SchedulePolicy will:

  • match all namespaced objects in the microsvc-demo namespace that have the label microServicesDemo: "yes" and the label key app.kubernetes.io (regardless of its value)
  • match all non-namespaced objects that have the label microServicesDemo: "yes" and the label key app.kubernetes.io (regardless of its value)
  • group the matched objects into N ScheduleGroups based on each object's value of the color label (specified in .groupBy.labelKey)
  • then, for each ScheduleGroup (e.g. the group of all matched objects labeled color: blue), try to pick a workload cluster following the guidelines in .spec.clusterSelector:
    • for all Clusters labeled nova.elotl.co/cluster.region=us-east-1 and not named kind-workload-2, kind-workload-3 or kind-workload-4, Nova checks whether the sum of resources (CPU, memory, GPU) required by the objects in the ScheduleGroup fits within the resources available in the Cluster. If such a cluster exists, Nova picks it for this ScheduleGroup.
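
For illustration, a Deployment like the sketch below (name, image, and label values are hypothetical) would be matched by this policy, grouped into the color: blue ScheduleGroup, and, with spreadMode: Divide and the 20/80 percentageSplit above, would end up running 2 replicas in kind-workload-1 and 8 replicas in kind-workload-2:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                # hypothetical name
  namespace: microsvc-demo      # matches the namespaceSelector
  labels:
    microServicesDemo: "yes"    # required by the resourceSelectors
    app.kubernetes.io: frontend # the key must exist; the value does not matter
    color: blue                 # placed into the "blue" ScheduleGroup via groupBy.labelKey
spec:
  replicas: 10                  # Divide with the 20/80 split above -> 2 + 8 replicas
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: nginx:1.25     # hypothetical image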
    • For all Cluster(s) having nova.elotl.co/cluster.region=us-east-1 label, and not being named kind-workload-2, kind-workload-3 or kind-workload-4 Nova will check if a sum of resources (CPU, memory, GPU) required by objects in the ScheduleGroup is smaller than available resources in the Clusters selected. If there is such cluster, Nova will pick this workload cluster for this ScheduleGroup.