# Policy-based Scheduling

## Overview
Nova currently supports three ways to schedule a workload: annotation-based scheduling, policy-based scheduling, and smart scheduling based on resource availability.
## Policy-Based Scheduling Testing Example
Policy-based scheduling is done through Nova's SchedulePolicy CRD. A SchedulePolicy contains one or more resource selectors and a placement that tells Nova how matching resources should be scheduled.
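To make that concrete, here is a rough sketch of what such a policy can look like. The apiVersion, object name, and exact field layout below are assumptions for illustration only; the authoritative example is `examples/sample-policy/policy.yaml` in the try-nova repository, which the steps below use.

```
# Hypothetical SchedulePolicy sketch -- apiVersion and field layout are
# assumptions; see examples/sample-policy/policy.yaml for the real schema.
apiVersion: policy.elotl.co/v1alpha1   # assumed API group/version
kind: SchedulePolicy
metadata:
  name: guestbook-policy               # hypothetical name
spec:
  # Resource selectors: which objects this policy applies to.
  resourceSelectors:
    labelSelectors:
      - matchLabels:
          app: redis
      - matchLabels:
          app: guestbook
  # Placement: which cluster the matching objects should be scheduled to.
  clusterSelector:
    matchLabels:
      kubernetes.io/metadata.name: kind-workload-1
```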
Both workload clusters are connected to the Nova Control Plane. We will use the names of those clusters in the SchedulePolicy. You can check how your clusters are named in the Nova Control Plane:
```
kubectl --context=nova get clusters

NAME              K8S-VERSION   K8S-CLUSTER   REGION   ZONE   READY   IDLE   STANDBY
kind-workload-1   1.22          workload-1                    True    True   False
kind-workload-2   1.22          workload-2                    True    True   False
```
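The clusterSelector in a SchedulePolicy matches on labels of these Cluster objects. If you want to see which labels your clusters carry, the standard kubectl `--show-labels` flag works here as well:

```
kubectl --context=nova get clusters --show-labels
```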
Manifests used in this tutorial can be found in the examples directory of the try-nova repository. If you installed Nova from the release tarball, you should already have those manifests.
1. Open `examples/sample-policy/policy.yaml` in a text editor and edit the line:

   ```
   clusterSelector:
     matchLabels:
       kubernetes.io/metadata.name: kind-workload-1 # change it to the name of one of your workload clusters
   ```

2. Apply the policy to the Nova Control Plane:

   ```
   kubectl --context=nova apply -f examples/sample-policy/policy.yaml
   ```
   This policy says: for any objects with the label `app: redis` or `app: guestbook`, schedule them to cluster `kind-workload-1` (in your case, the workload cluster name will likely be different).
3. Apply the guestbook application:

   ```
   kubectl --context=nova apply -f examples/sample-policy/guestbook-all-in-one.yaml -n guestbook
   ```

   This schedules the guestbook stateless application into `kind-workload-1`.
4. Run `kubectl --context=nova get all -n guestbook`. You should be able to see something like the following:
   ```
   NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
   service/frontend         LoadBalancer   10.96.25.97    35.223.90.60   80:31528/TCP   82s
   service/redis-follower   ClusterIP      10.96.251.47   <none>         6379/TCP       83s
   service/redis-leader     ClusterIP      10.96.27.169   <none>         6379/TCP       83s

   NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/frontend         3/3     3            3           83s
   deployment.apps/redis-follower   2/2     2            2           83s
   deployment.apps/redis-leader     1/1     1            1           83s
   ```
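The output above is the view from the Nova Control Plane. If your kubeconfig also contains a context for the workload cluster itself (with the kind setup used here it would likely be named `kind-workload-1`), you can confirm the pods actually landed there:

```
kubectl --context=kind-workload-1 get pods -n guestbook
```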
It may take a while until the `frontend` service gets an external IP allocated. You can use `kubectl wait` to wait for:

- Deployment `frontend` being available:

  ```
  kubectl --context=nova wait --for=condition=available -n guestbook deployment frontend --timeout=180s
  ```

- Deployment `redis-leader` being available:

  ```
  kubectl --context=nova wait --for=condition=available -n guestbook deployment redis-leader --timeout=180s
  ```

- Deployment `redis-follower` being available:

  ```
  kubectl --context=nova wait --for=condition=available -n guestbook deployment redis-follower --timeout=180s
  ```

- External IP of the `frontend` service LoadBalancer being accessible:

  ```
  kubectl wait -n guestbook service/frontend --for=jsonpath='{.status.loadBalancer.ingress[0].ip}' --timeout=180s
  ```
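Once the external IP is allocated, you can fetch it and hit the frontend from the command line. A minimal sketch, assuming `curl` is available and the LoadBalancer reports an IP rather than a hostname:

```
EXTERNAL_IP=$(kubectl --context=nova get service frontend -n guestbook \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${EXTERNAL_IP}" | head
```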
The external IP of the `frontend` service should lead you to the main page of the guestbook application.
## Workload migration
Now let's say `kind-workload-1` will go through some maintenance and you want to migrate your guestbook application to `kind-workload-2`. You can achieve this by editing the SchedulePolicy:
- Open `examples/sample-policy/policy_updated.yaml` in a text editor and edit the line:

  ```
  clusterSelector:
    matchLabels:
      kubernetes.io/metadata.name: kind-workload-2 # change it to the name of one of your workload clusters
  ```

  Update `kind-workload-1` to `kind-workload-2` (in your case, the workload cluster names will likely be different). Then, apply the edited policy to the Nova Control Plane:

  ```
  kubectl --context=nova apply -f examples/sample-policy/policy_updated.yaml
  ```
- You should be able to see your workload deleted from `kind-workload-1` and recreated in `kind-workload-2`.
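To watch the migration happen, you can query the workload clusters directly, assuming your kubeconfig has contexts for both kind clusters (likely named `kind-workload-1` and `kind-workload-2` in this setup):

```
kubectl --context=kind-workload-1 get pods -n guestbook   # guestbook pods should disappear here
kubectl --context=kind-workload-2 get pods -n guestbook   # ...and show up here
```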
## Cleanup
Delete the guestbook workloads:

```
kubectl --context=nova delete -f examples/sample-policy/guestbook-all-in-one.yaml -n guestbook
```

and then delete the schedule policy:

```
kubectl --context=nova delete -f examples/sample-policy/policy.yaml
```
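To confirm everything was removed, you can list what is left in the namespace on the Nova Control Plane; it should come back empty:

```
kubectl --context=nova get all -n guestbook
```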