
Policy-based scheduling

In policy-based scheduling, users define a custom Kubernetes resource, the SchedulePolicy, to specify where Kubernetes resources should be scheduled.

A SchedulePolicy specifies the following aspects of workload placement:

  1. Namespace selector: This field specifies which namespaces' Kubernetes objects will be considered for scheduling in accordance with the current policy.
  2. Resource selector: This field specifies which Kubernetes objects will be scheduled in accordance with the current policy.
  3. Cluster selector: This field specifies the list of workload clusters that will be considered for scheduling Kubernetes resources.
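
Putting the three selectors together, a complete SchedulePolicy might look like the following sketch (the policy name is illustrative; the selector values are taken from the examples in the sections below):

  apiVersion: policy.elotl.co/v1alpha1
  kind: SchedulePolicy
  metadata:
    name: microsvc-demo-policy   # illustrative name
  spec:
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: microsvc-demo
    resourceSelectors:
      labelSelectors:
      - matchLabels:
          microServicesDemo: "yes"
    clusterSelector:
      matchLabels:
        nova.elotl.co/cluster.region: "us-east-1"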

Let's look in detail at how each of these selectors can be specified:

Namespace selector

  1. This namespace selector specifies that all Kubernetes resources in the microsvc-demo namespace will be scheduled by the policy in which it is included:

     spec:
       namespaceSelector:
         matchLabels:
           kubernetes.io/metadata.name: microsvc-demo
  2. This namespace selector specifies that the schedule policy applies to all Kubernetes resources in the namespaces namespace-1 and namespace-2:

     spec:
       namespaceSelector:
         matchExpressions:
         - key: kubernetes.io/metadata.name
           operator: In
           values:
           - namespace-1
           - namespace-2

Other supported operators are NotIn, Exists, and DoesNotExist.
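
For example, a namespace selector that uses NotIn to exclude a namespace might look like this sketch (kube-system is just an illustrative value):

  spec:
    namespaceSelector:
      matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: NotIn
        values:
        - kube-system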

Resource selector

The resource selector allows the user to specify labels or expressions as key-value pairs to describe the Kubernetes resources that should match this policy.

  1. To define a SchedulePolicy that applies to all Kubernetes objects that have the label microServicesDemo: "yes", use this resource selector:

     resourceSelectors:
       labelSelectors:
       - matchLabels:
           microServicesDemo: "yes"
  2. To define a SchedulePolicy that applies to all Kubernetes objects that have the label key app.kubernetes.io, use this resource selector:

     resourceSelectors:
       labelSelectors:
       - matchExpressions:
         - key: app.kubernetes.io
           operator: Exists
           values: []
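
For the first selector above to match, the workload must carry the corresponding label in its metadata. A minimal sketch of a matching Deployment (the name and image are illustrative):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: demo-app              # illustrative name
    labels:
      microServicesDemo: "yes"  # matched by the resource selector above
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: demo-app
    template:
      metadata:
        labels:
          app: demo-app
      spec:
        containers:
        - name: demo-app
          image: nginx:1.25     # illustrative image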

Cluster selector

Clusters to be considered for scheduling can be specified using labels or expressions. The cluster selector is an optional field: if it is not included in a SchedulePolicy, Nova will consider all workload clusters for placing Kubernetes resources. If more than one cluster is selected, Nova will pick a workload cluster that has sufficient resources to place the Kubernetes object. You can use cluster properties (such as name, cluster provider, Kubernetes version, cluster region, zone, etc.), exposed by Nova as Cluster object labels (see Nova-specific Labels and Annotations), to control where the workloads should run.

  1. This cluster selector specifies that all workloads matched by the encompassing SchedulePolicy will be placed on any workload cluster that has the label nova.elotl.co/cluster.region: "us-east-1":

     clusterSelector:
       matchLabels:
         nova.elotl.co/cluster.region: "us-east-1"
  2. This cluster selector specifies that all workloads matched by the encompassing SchedulePolicy will be placed on any workload cluster except the clusters named prod-eks-us-west-1 and prod-eks-us-west-2:

     clusterSelector:
       matchExpressions:
       - key: kubernetes.io/metadata.name
         operator: NotIn
         values:
         - prod-eks-us-west-1
         - prod-eks-us-west-2
  3. This cluster selector specifies that all workloads matched by the encompassing SchedulePolicy will be placed on any workload cluster running Kubernetes version v1.25.0:

     apiVersion: policy.elotl.co/v1alpha1
     kind: SchedulePolicy
     metadata:
       ...
     spec:
       clusterSelector:
         matchLabels:
           nova.elotl.co/cluster.version: "v1.25.0"
       resourceSelectors:
         labelSelectors:
         ...
       namespaceSelector:
         ...
     ...
  4. This cluster selector specifies that all workloads matched by the encompassing SchedulePolicy will be placed on the workload cluster named cluster-one:

     apiVersion: policy.elotl.co/v1alpha1
     kind: SchedulePolicy
     metadata:
       ...
     spec:
       clusterSelector:
         matchLabels:
           kubernetes.io/metadata.name: "cluster-one"
       resourceSelectors:
         labelSelectors:
         ...
       namespaceSelector:
         ...
     ...
  5. This cluster selector specifies that all workloads matched by the encompassing SchedulePolicy will be placed on any workload cluster that is not running in region us-east-1:

     apiVersion: policy.elotl.co/v1alpha1
     kind: SchedulePolicy
     metadata:
       ...
     spec:
       clusterSelector:
         matchExpressions:
         - key: nova.elotl.co/cluster.region
           operator: NotIn
           values:
           - us-east-1
       resourceSelectors:
         labelSelectors:
         ...
       namespaceSelector:
         ...
     ...
  6. You can also set custom labels on the Cluster objects and use them. For example, if you mix on-prem and cloud clusters and want to schedule workloads on on-prem clusters only, you can label all on-prem Cluster objects with on-prem: "true" and use it in the SchedulePolicy:

     apiVersion: policy.elotl.co/v1alpha1
     kind: SchedulePolicy
     metadata:
       ...
     spec:
       clusterSelector:
         matchLabels:
           on-prem: "true"
       resourceSelectors:
         labelSelectors:
         ...
       namespaceSelector:
         ...
     ...
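
For this selector to match, each on-prem Cluster object in the Nova Control Plane needs to carry the custom label. A sketch of the relevant Cluster object metadata (the cluster name is illustrative):

  metadata:
    name: on-prem-cluster-1   # illustrative cluster name
    labels:
      on-prem: "true"         # custom label referenced by the clusterSelector above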