
Policy-based Scheduling

This tutorial assumes the following environment variables; adjust the values to match your own kubeconfig contexts and cluster names:

export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export K8S_CLUSTER_CONTEXT_1=kind-workload-1
export K8S_CLUSTER_CONTEXT_2=kind-workload-2
export K8S_HOSTING_CLUSTER_CONTEXT=kind-cp
export NOVA_WORKLOAD_CLUSTER_1=kind-workload-1
export NOVA_WORKLOAD_CLUSTER_2=kind-workload-2

Overview

Nova currently supports three ways to schedule a workload: annotation-based scheduling, policy-based scheduling, and smart scheduling based on resource availability. This tutorial walks through policy-based scheduling.

Policy-based Scheduling Example

Policy-based scheduling is done through Nova's SchedulePolicy CRD. A SchedulePolicy contains one or more resource selectors, plus a placement that tells Nova how matching resources should be scheduled.
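For orientation, a minimal SchedulePolicy might look like the sketch below. The apiVersion and exact field names here are assumptions for illustration; examples/sample-policy/policy.yaml, used later in this tutorial, is the authoritative manifest.

apiVersion: policy.elotl.co/v1alpha1   # assumed group/version; check your install
kind: SchedulePolicy
metadata:
  name: app-policy
spec:
  resourceSelectors:                   # which objects this policy matches
    labelSelectors:
      - matchLabels:
          app: guestbook
  clusterSelector:                     # where matching objects are placed
    matchLabels:
      kubernetes.io/metadata.name: kind-workload-1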

Both workload clusters are connected to the Nova Control Plane. We will use the names of those clusters in the SchedulePolicy. You can check how your clusters are named in the Nova Control Plane:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get clusters
NAME              K8S-VERSION   K8S-CLUSTER   REGION   ZONE   READY   IDLE   STANDBY
kind-workload-1   1.22          workload-1                    True    True    False
kind-workload-2   1.22          workload-2                    True    True    False

Manifests used in this tutorial can be found in the examples directory of the try-nova repository. If you installed Nova from the release tarball, you already have these manifests.
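If you do not have the manifests locally, you can clone them (assuming the repository's usual GitHub location):

git clone https://github.com/elotl/try-nova.git
cd try-nova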

  1. Open examples/sample-policy/policy.yaml in a text editor and edit these lines:

     clusterSelector:
       matchLabels:
         kubernetes.io/metadata.name: ${NOVA_WORKLOAD_CLUSTER_1} # change it to the name of one of your workload clusters

  2. Create a policy that schedules any object labeled app: redis or app: guestbook to cluster ${NOVA_WORKLOAD_CLUSTER_1} (your workload cluster name will likely differ):

envsubst < "examples/sample-policy/policy.yaml" > "./policy.yaml"
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f ./policy.yaml
  3. Deploy the guestbook stateless application; the policy schedules it into ${NOVA_WORKLOAD_CLUSTER_1}. (If the guestbook namespace does not exist yet in the Nova Control Plane, create it first with kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create namespace guestbook.)
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f examples/sample-policy/guestbook-all-in-one.yaml -n guestbook
  4. Run kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get all -n guestbook. You should see something like the following:

    NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    service/frontend         LoadBalancer   10.96.25.97    35.223.90.60   80:31528/TCP   82s
    service/redis-follower   ClusterIP      10.96.251.47   <none>         6379/TCP       83s
    service/redis-leader     ClusterIP      10.96.27.169   <none>         6379/TCP       83s

    NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/frontend         3/3     3            3           83s
    deployment.apps/redis-follower   2/2     2            2           83s
    deployment.apps/redis-leader     1/1     1            1           83s
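To double-check the placement, you can list the policy on the Nova Control Plane and query the first workload cluster directly. (The schedulepolicies resource name is an assumption here and may differ across Nova versions.)

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicies
kubectl --context=${K8S_CLUSTER_CONTEXT_1} get pods -n guestbook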

It may take a while until the frontend service gets an external IP allocated. You can use kubectl wait to wait for each of the following conditions. Deployment frontend being available:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment frontend --timeout=240s

Deployment redis-leader being available:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment redis-leader --timeout=240s

Deployment redis-follower being available:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment redis-follower --timeout=240s
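Alternatively, you can wait for all three deployments in the namespace at once using kubectl wait's --all flag:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment --all --timeout=240s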

And the external IP of the frontend service LoadBalancer being allocated:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait -n guestbook service/frontend --for=jsonpath='{.status.loadBalancer.ingress[0].ip}' --timeout=240s

Opening the external IP of the frontend service in a browser should take you to the main page of the guestbook application.
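For example, you can read the allocated address back with the same jsonpath expression used above and issue a test request:

FRONTEND_IP=$(kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get service frontend -n guestbook -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${FRONTEND_IP}/" | head -n 5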

Workload migration

Now let's say ${NOVA_WORKLOAD_CLUSTER_1} is about to go through maintenance and you want to migrate your guestbook application to ${NOVA_WORKLOAD_CLUSTER_2}. You can achieve this by editing the SchedulePolicy:

  1. Open examples/sample-policy/policy_updated.yaml in a text editor and edit these lines:

     clusterSelector:
       matchLabels:
         kubernetes.io/metadata.name: ${NOVA_WORKLOAD_CLUSTER_2} # change it to the name of one of your workload clusters

Update ${NOVA_WORKLOAD_CLUSTER_1} to ${NOVA_WORKLOAD_CLUSTER_2} (your workload cluster name will likely differ). Then apply the edited policy to the Nova Control Plane:

envsubst < "examples/sample-policy/policy_updated.yaml" > "./policy_updated.yaml"
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f ./policy_updated.yaml
  2. You should see your workload deleted from ${NOVA_WORKLOAD_CLUSTER_1} and recreated in ${NOVA_WORKLOAD_CLUSTER_2}; the commands below show how to verify this.
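You can verify the migration from each workload cluster, for example:

kubectl --context=${K8S_CLUSTER_CONTEXT_1} get pods -n guestbook   # should eventually report no resources
kubectl --context=${K8S_CLUSTER_CONTEXT_2} get pods -n guestbook   # pods recreated here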

Cleanup

Delete guestbook workloads:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/sample-policy/guestbook-all-in-one.yaml -n guestbook

and then delete the schedule policy and remove the generated files:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f ./policy.yaml
rm -f ./policy.yaml
rm -f ./policy_updated.yaml
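If you created the guestbook namespace on the Nova Control Plane just for this tutorial, you can remove it as well:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete namespace guestbook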