
Configuration

SchedulePolicy CRD

Full example of SchedulePolicy:

apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: demo-policy
spec:
  # The namespace selector specifies the namespace(s) the matching resources live in.
  # Non-namespaced objects are matched as well.
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: microsvc-demo
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: NotIn # possible operators: In, NotIn, Exists, DoesNotExist
        values:
          - namespace-2
          - namespace-3
          - namespace-4
  # The cluster selector specifies the list of workload clusters (represented as Cluster Custom Resources)
  # that will be considered as a hosting cluster for all resources matched by this policy.
  # If more than one cluster is selected, Nova will try to pick a workload cluster which has enough resources
  # to host the object (or the objects grouped into a ScheduleGroup).
  # If clusterSelector is not specified, Nova will consider all workload clusters.
  clusterSelector:
    matchLabels:
      nova.elotl.co/cluster.region: "us-east-1"
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: NotIn
        values:
          - kind-workload-2
          - kind-workload-3
          - kind-workload-4
  # groupBy.labelKey specifies how the objects should be grouped.
  # If labelKey is empty (the default), Nova won't group objects into a ScheduleGroup and will
  # try to find a workload cluster for each object separately.
  # If you specify groupBy.labelKey, Nova will create a ScheduleGroup for each value of this label.
  # This is convenient if you want to schedule multiple objects together (to the same workload cluster).
  groupBy:
    labelKey: color

  # spreadConstraints enables spreading a group of objects onto multiple clusters.
  # spreadConstraints.topologyKey refers to the Cluster CR label which should be used to group
  # clusters into topology domains. E.g. for topologyKey: nova.elotl.co/cluster.version,
  # clusters with nova.elotl.co/cluster.version=v1.22 will be treated as one topology domain.
  # For kubernetes.io/metadata.name, each cluster will be treated as its own topology domain.
  # percentageSplit defines the spread constraints over the topology domains.
  # The example below says:
  # For all k8s resources with replicas matched by this policy (e.g. Deployment, ReplicaSet),
  # take the list of workload clusters matching this policy (see .clusterSelector);
  # then, for the cluster with the kubernetes.io/metadata.name=kind-workload-1 label, create a copy of the k8s resources from this group
  # and set the pod controllers' (Deployment, ReplicaSet, etc.) replicas to 20% of the original replica count.
  # For the cluster with the kubernetes.io/metadata.name=kind-workload-2 label, create a copy of the k8s resources from this group
  # and set the pod controllers' (Deployment, ReplicaSet, etc.) replicas to 80% of the original replica count.
  spreadConstraints:
    # Available spreadModes are Divide and Replicate.
    # Divide takes the number of replicas for Deployments, ReplicaSets, StatefulSets, etc. (or parallelism for Jobs)
    # and divides it between the chosen clusters. In Divide mode it is guaranteed that exactly the number of replicas
    # specified in the manifests will run.
    # In Replicate mode, each workload cluster will run the original specified replica count.
    # This means that if your Deployment has .spec.replicas set to 2 and the policy matches 3 workload clusters,
    # each workload cluster will run 2 replicas, so you will end up running 6 replicas in total.
    spreadMode: Divide
    topologyKey: kubernetes.io/metadata.name
    # percentageSplit is ignored for spreadMode: Replicate.
    percentageSplit:
      - topologyValue: kind-workload-1
        percentage: 20
      - topologyValue: kind-workload-2
        percentage: 80

  # resourceSelectors specify which resources match this policy.
  # With the example below, an object matches if it has the microServicesDemo: "yes" label
  # and the app.kubernetes.io label key (regardless of its value).
  resourceSelectors:
    labelSelectors:
      - matchLabels:
          microServicesDemo: "yes"
        matchExpressions:
          - key: app.kubernetes.io
            operator: Exists
            values: []
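
For reference, a Deployment like the sketch below would be matched by this policy; the name, pod selector labels, and container image are illustrative, and only the namespace plus the three metadata labels matter for matching and grouping:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend            # illustrative name
  namespace: microsvc-demo  # matches .spec.namespaceSelector
  labels:
    microServicesDemo: "yes"     # matches resourceSelectors matchLabels
    app.kubernetes.io: frontend  # key must exist (operator: Exists); the value is arbitrary
    color: blue                  # groupBy.labelKey value -> joins the "blue" ScheduleGroup
spec:
  replicas: 10
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: nginx:1.25  # illustrative image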

This SchedulePolicy will:

  • match all namespaced objects in the microsvc-demo namespace that have the microServicesDemo: "yes" label and the app.kubernetes.io label key (regardless of its value)
  • match all non-namespaced objects that have the microServicesDemo: "yes" label and the app.kubernetes.io label key (regardless of its value)
  • Then, objects will be grouped into N groups based on each object's value of the color label (specified in .groupBy.labelKey).
  • Then, for each ScheduleGroup (e.g. the group of all matched objects having the color: blue label), Nova will try to pick a workload cluster, following the guidelines specified in .spec.clusterSelector:
    • For all Clusters having the nova.elotl.co/cluster.region=us-east-1 label and not named kind-workload-2, kind-workload-3 or kind-workload-4, Nova will check whether the sum of the resources (CPU, memory, GPU) required by the objects in the ScheduleGroup is smaller than the resources available in the selected Clusters. If there is such a cluster, Nova will pick that workload cluster for the ScheduleGroup.
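
As a worked illustration of the spread constraints (a sketch, assuming the illustrative frontend Deployment above with .spec.replicas: 10 lands in the color: blue group), in Divide mode Nova creates a copy of the Deployment for each topology domain listed in percentageSplit and rewrites its replica count accordingly:

# copy placed on kind-workload-1
spec:
  replicas: 2  # 20% of the original 10 replicas

# copy placed on kind-workload-2
spec:
  replicas: 8  # 80% of the original 10 replicas

The total across clusters stays at the original 10 replicas. If spreadMode were Replicate instead, percentageSplit would be ignored and each selected cluster would run all 10 replicas.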