Policy-based Scheduling
Overview
Nova currently supports two ways to schedule a workload: annotation-based scheduling and policy-based scheduling. Within policy-based scheduling, Nova provides capacity-based (resource-aware) scheduling policies and spread scheduling policies.
Policy-based Scheduling Testing Example
Policy-based scheduling is driven by Nova's SchedulePolicy Custom Resource. A SchedulePolicy contains one or more resource selectors that determine which objects it applies to, and a placement that tells Nova where matching resources should be scheduled.
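For illustration, a minimal SchedulePolicy might look like the sketch below. The apiVersion, group, and exact field layout are assumptions made for this example; the manifests under examples/sample-policy/ in the try-nova repository are the authoritative reference for your Nova version.
```
# Illustrative sketch only: the apiVersion/group and field layout are assumptions;
# see examples/sample-policy/policy.yaml in try-nova for a real manifest.
apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: app-demo-policy
spec:
  # Select the objects this policy applies to
  # (anything labeled app: redis or app: guestbook).
  resourceSelectors:
    labelSelectors:
      - matchExpressions:
          - key: app
            operator: In
            values:
              - redis
              - guestbook
  # Place matching objects on a specific workload cluster.
  clusterSelector:
    matchLabels:
      kubernetes.io/metadata.name: wlc-1
```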
We will first export these environment variables so that subsequent steps in this tutorial can be easily followed.
```
export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export NOVA_WORKLOAD_CLUSTER_1=wlc-1
export NOVA_WORKLOAD_CLUSTER_2=wlc-2
```
Export these additional environment variables if you installed Nova using the tarball.
```
export K8S_HOSTING_CLUSTER_CONTEXT=k8s-cluster-hosting-cp
export NOVA_WORKLOAD_CLUSTER_1=wlc-1
export NOVA_WORKLOAD_CLUSTER_2=wlc-2
```
Alternatively, export these environment variables if you installed Nova using the setup scripts provided in the try-nova repository.
```
export K8S_HOSTING_CLUSTER_CONTEXT=kind-hosting-cluster
export K8S_CLUSTER_CONTEXT_1=kind-wlc-1
export K8S_CLUSTER_CONTEXT_2=kind-wlc-2
```
We check that both workload clusters are connected to the Nova Control Plane. We will use the names of these clusters in the SchedulePolicy.
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get clusters
```
NAME    K8S-VERSION   K8S-CLUSTER   REGION   ZONE   READY   IDLE   STANDBY
wlc-1   1.28          wlc-1                         True    True   False
wlc-2   1.28          wlc-2                         True    True   False
```
Manifests used in this tutorial can be found in the examples directory of the try-nova repository. If you installed Nova from the release tarball, you should already have those manifests.
- We will first create a policy that schedules any object labeled app: redis or app: guestbook to the workload cluster ${NOVA_WORKLOAD_CLUSTER_1} (in your setup, the workload cluster name may be different).
If you have the envsubst tool installed locally, you can use it to substitute the workload cluster name into the policy manifest:
envsubst < "examples/sample-policy/policy.yaml" > "./policy.yaml"
Alternatively, the policy manifest examples/sample-policy/policy.yaml can be edited by hand to replace ${NOVA_WORKLOAD_CLUSTER_1} with the name of your workload cluster:
```
clusterSelector:
  matchLabels:
    kubernetes.io/metadata.name: ${NOVA_WORKLOAD_CLUSTER_1} # change it to the name of one of your workload clusters
```
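Either way, you can confirm the substitution before applying the policy, for example:
grep "kubernetes.io/metadata.name" ./policy.yaml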
- The updated policy is created as follows:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f ./policy.yaml
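You can confirm that the policy now exists in the Nova Control Plane (the schedulepolicies resource name is an assumption here; adjust it if your Nova version exposes the resource under a different name):
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicies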
- Next, we schedule the guestbook application into the workload cluster ${NOVA_WORKLOAD_CLUSTER_1}.
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f examples/sample-policy/guestbook-all-in-one.yaml -n guestbook
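If the apply fails because the guestbook namespace does not exist in the Nova Control Plane, create it first (this assumes the sample manifest does not create the namespace itself):
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create namespace guestbook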
- All the k8s components of the guestbook app can be viewed as follows:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get all -n guestbook
```
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
service/frontend         LoadBalancer   10.96.25.97    35.223.90.60   80:31528/TCP   82s
service/redis-follower   ClusterIP      10.96.251.47   <none>         6379/TCP       83s
service/redis-leader     ClusterIP      10.96.27.169   <none>         6379/TCP       83s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/frontend         3/3     3            3           83s
deployment.apps/redis-follower   2/2     2            2           83s
deployment.apps/redis-leader     1/1     1            1           83s
```
It may take a while until the frontend service gets an external IP address allocated to it. You can use kubectl wait to wait for each of the components to become available:
- Deployment frontend:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment frontend --timeout=240s
- Deployment redis-leader:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment redis-leader --timeout=240s
- Deployment redis-follower:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment redis-follower --timeout=240s
- Wait for the external IP of the frontend service LoadBalancer to become available:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait -n guestbook service/frontend --for=jsonpath='{.status.loadBalancer.ingress[0].ip}' --timeout=240s
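Once the address is allocated, you can print it with, for example:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get service frontend -n guestbook -o jsonpath='{.status.loadBalancer.ingress[0].ip}'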
The external IP of the frontend service will lead you to the main page of the Guestbook application.
If you are running this tutorial on Kind clusters, you can use kubectl port-forward to access the application in a local browser:
- Get the Kubeconfig of the target Kind workload cluster:
kind get kubeconfig --name ${NOVA_WORKLOAD_CLUSTER_1} > kubeconfig-wlc-1
- Get all the components of the Guestbook application on the workload cluster:
kubectl --kubeconfig=./kubeconfig-wlc-1 get all -n guestbook
- Port-forward to view the application:
kubectl --kubeconfig=./kubeconfig-wlc-1 port-forward service/frontend 8080:80 -n guestbook
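With the port-forward running, you can check that the application responds, for example:
curl http://localhost:8080/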
You will then be able to access your Guestbook application in a browser at the URL: http://localhost:8080/
Workload migration
Now let's say your ${NOVA_WORKLOAD_CLUSTER_1} will be going through maintenance and you would like to migrate your guestbook application to ${NOVA_WORKLOAD_CLUSTER_2}.
You can achieve this by updating the Schedule Policy:
- If you have the envsubst tool installed locally, you can use it to substitute the workload cluster name into the updated policy manifest:
envsubst < "examples/sample-policy/policy_updated.yaml" > "./policy_updated.yaml"
Alternatively, the policy manifest examples/sample-policy/policy_updated.yaml can be edited by hand to replace ${NOVA_WORKLOAD_CLUSTER_2} with the name of your workload cluster:
```
clusterSelector:
  matchLabels:
    kubernetes.io/metadata.name: ${NOVA_WORKLOAD_CLUSTER_2} # change it to the name of one of your workload clusters
```
Then, apply the edited policy to the Nova Control Plane:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f ./policy_updated.yaml
- You should now be able to see your workload deleted from ${NOVA_WORKLOAD_CLUSTER_1} and recreated in ${NOVA_WORKLOAD_CLUSTER_2}.
By using the kubeconfig of the second workload cluster you will be able to check that the migration took place as expected. If you are running this tutorial on Kind clusters, you can verify this as follows:
- Get the Kubeconfig of the new target workload cluster:
kind get kubeconfig --name ${NOVA_WORKLOAD_CLUSTER_2} > kubeconfig-wlc-2
- Get all the components of the Guestbook application on this workload cluster:
kubectl --kubeconfig=./kubeconfig-wlc-2 get all -n guestbook
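You can also wait for the migrated frontend deployment to become available in the new cluster, for example:
kubectl --kubeconfig=./kubeconfig-wlc-2 wait --for=condition=available -n guestbook deployment frontend --timeout=240s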
Cleanup
You can delete the Guestbook application as follows:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/sample-policy/guestbook-all-in-one.yaml -n guestbook
and then delete the corresponding Schedule Policy:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f ./policy.yaml
rm -f ./policy.yaml
rm -f ./policy_updated.yaml
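Optionally, remove the generated kubeconfig files and, if you created it earlier in this tutorial, the guestbook namespace in the Nova Control Plane:
rm -f ./kubeconfig-wlc-1 ./kubeconfig-wlc-2
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete namespace guestbook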