Policy-based Scheduling
Overview
Nova currently supports two ways to schedule a workload: annotation-based scheduling and policy-based scheduling. Within policy-based scheduling, Nova provides capacity-based (resource-aware) scheduling policies and spread scheduling policies.
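For context, annotation-based scheduling pins an object to a cluster via an annotation on the object itself. The sketch below is illustrative only; the exact annotation key is an assumption and should be confirmed against your Nova documentation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    nova.elotl.co/cluster: wlc-1 # illustrative annotation key; confirm in your Nova docs
The rest of this tutorial focuses on policy-based scheduling.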
Policy-based Scheduling Testing Example
Policy-based scheduling is done via Nova's SchedulePolicy Custom Resource. A SchedulePolicy contains one or more resource selectors and a placement that tells Nova how matching resources should be scheduled.
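As an illustration, a minimal SchedulePolicy for this tutorial's guestbook app might look like the sketch below. The apiVersion and the resourceSelectors layout are assumptions inferred from the snippets and output shown later in this tutorial; the shipped manifest examples/sample-policy/policy.yaml is the authoritative version.
apiVersion: policy.elotl.co/v1alpha1 # assumed API version; check your installed CRD
kind: SchedulePolicy
metadata:
  name: app-guestbook
spec:
  clusterSelector: # placement: which Cluster matching objects go to
    matchLabels:
      kubernetes.io/metadata.name: wlc-1
  resourceSelectors: # which objects this policy matches
    labelSelectors:
      - matchExpressions:
          - key: app
            operator: In
            values:
              - redis
              - guestbook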
We will first export these environment variables so that subsequent steps in this tutorial can be easily followed.
export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export NOVA_WORKLOAD_CLUSTER_1=wlc-1
export NOVA_WORKLOAD_CLUSTER_2=wlc-2
If you installed Nova using the release tarball, export this additional environment variable, replacing the value k8s-cluster-hosting-cp with the context name of your Nova hosting cluster if it differs.
export K8S_HOSTING_CLUSTER_CONTEXT=k8s-cluster-hosting-cp
Alternatively, export these environment variables if you installed Nova using the setup scripts provided in the try-nova repository.
export K8S_HOSTING_CLUSTER_CONTEXT=kind-hosting-cluster
export K8S_CLUSTER_CONTEXT_1=kind-wlc-1
export K8S_CLUSTER_CONTEXT_2=kind-wlc-2
Environment variable names with the prefix NOVA_ refer to the Cluster custom resource in Nova. Cluster context names with the prefix K8S_ refer to the underlying Kubernetes clusters.
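You can confirm that these context names exist in your local kubeconfig before proceeding:
kubectl config get-contexts -o name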
We check that both workload clusters are connected to the Nova Control Plane. We will use these cluster names (cluster here refers to the Cluster custom resource) in the SchedulePolicy.
kubectl get --context=${NOVA_CONTROLPLANE_CONTEXT} clusters
NAME K8S-VERSION K8S-CLUSTER REGION ZONE READY IDLE STANDBY
wlc-1 1.32 nova-wlc-1 us-central1 us-central1-f True True False
wlc-2 1.32 nova-wlc-2 us-central1 us-central1-c True True False
Manifests used in this tutorial can be found in the examples directory of the try-nova repository. If you installed Nova from the release tarball, you already have these manifests.
- We will first create a policy that schedules any object labeled app: redis or app: guestbook to the workload cluster ${NOVA_WORKLOAD_CLUSTER_1} (in your case, the workload cluster name could be different).
If you have the envsubst tool installed locally, you can use it to replace the workload cluster name in the policy manifest:
envsubst < "examples/sample-policy/policy.yaml" > "./policy.yaml"
Alternatively, you can edit the policy manifest examples/sample-policy/policy.yaml to replace ${NOVA_WORKLOAD_CLUSTER_1} with the name of your workload cluster:
clusterSelector:
  matchLabels:
    kubernetes.io/metadata.name: ${NOVA_WORKLOAD_CLUSTER_1} # change it to the name of one of your workload clusters
- The updated policy is created as follows:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f ./policy.yaml
Check that the policy has been created:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicies
NAME AGE
app-guestbook 23s
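To inspect the policy's selectors and placement in detail, you can describe it:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} describe schedulepolicy app-guestbook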
- Next, we schedule the guestbook application to the workload cluster ${NOVA_WORKLOAD_CLUSTER_1}.
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f examples/sample-policy/guestbook-all-in-one.yaml -n guestbook
The output shows the list of resources that were created:
namespace/guestbook created
deployment.apps/redis-leader created
service/redis-leader created
deployment.apps/redis-follower created
service/redis-follower created
deployment.apps/frontend created
service/frontend created
- All the k8s components of the guestbook app can be viewed as follows:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get all -n guestbook
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/frontend LoadBalancer 10.96.25.97 35.223.90.60 80:31528/TCP 82s
service/redis-follower ClusterIP 10.96.251.47 <none> 6379/TCP 83s
service/redis-leader ClusterIP 10.96.27.169 <none> 6379/TCP 83s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/frontend 3/3 3 3 83s
deployment.apps/redis-follower 2/2 2 2 83s
deployment.apps/redis-leader 1/1 1 1 83s
It may take a while for the frontend service to be allocated an external IP address. You can use kubectl wait to wait for each of the components to become available:
- Deployment frontend:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment frontend --timeout=240s
- Deployment redis-leader:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment redis-leader --timeout=240s
- Deployment redis-follower:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait --for=condition=available -n guestbook deployment redis-follower --timeout=240s
- Wait for the external IP of the frontend service LoadBalancer to become available:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} wait -n guestbook service/frontend --for=jsonpath='{.status.loadBalancer.ingress[0].ip}' --timeout=240s
The external IP of the frontend service will lead you to the main page of the Guestbook application.
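For example, you can read the allocated address directly and print the URL:
GUESTBOOK_IP=$(kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get service frontend -n guestbook -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Guestbook is available at http://${GUESTBOOK_IP}/"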
If you are running this tutorial on Kind clusters, please follow the remaining steps in this section. You can use kubectl port-forward to access the application in a local browser.
- Get the Kubeconfig of the target Kind workload cluster:
kind get kubeconfig --name ${NOVA_WORKLOAD_CLUSTER_1} > kubeconfig-wlc-1
- Get all the components of the Guestbook application on the workload cluster:
kubectl --kubeconfig=./kubeconfig-wlc-1 get all -n guestbook
- Port-forward to view the application:
kubectl --kubeconfig=./kubeconfig-wlc-1 port-forward service/frontend 8080:80 -n guestbook
You will then be able to access your Guestbook application in a browser at http://localhost:8080/
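While the port-forward is running, you can also verify from another terminal that the application responds:
curl -s http://localhost:8080/ | head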
Workload migration
Workload migration is a key Nova feature that lets you migrate a Kubernetes workload from one cluster to another simply by editing the SchedulePolicy associated with it.
Let's say your ${NOVA_WORKLOAD_CLUSTER_1} will be going through maintenance and you would like to migrate your guestbook application to ${NOVA_WORKLOAD_CLUSTER_2}.
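Conceptually, the only change between the two sample manifests is the target cluster name in the policy's clusterSelector (assuming, as in this tutorial, that they differ only in this field):
clusterSelector:
  matchLabels:
-   kubernetes.io/metadata.name: ${NOVA_WORKLOAD_CLUSTER_1}
+   kubernetes.io/metadata.name: ${NOVA_WORKLOAD_CLUSTER_2}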
You can achieve this by updating the SchedulePolicy:
- If you have the envsubst tool installed locally, you can use it to replace the workload cluster name in the updated policy manifest:
envsubst < "examples/sample-policy/policy_updated.yaml" > "./policy_updated.yaml"
Alternatively, you can edit the policy manifest examples/sample-policy/policy_updated.yaml to replace ${NOVA_WORKLOAD_CLUSTER_2} with the name of your workload cluster:
clusterSelector:
  matchLabels:
    kubernetes.io/metadata.name: ${NOVA_WORKLOAD_CLUSTER_2} # change it to the name of one of your workload clusters
Then, apply the edited policy to the Nova Control Plane:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} apply -f ./policy_updated.yaml
schedulepolicy.policy.elotl.co/app-guestbook configured
- You should now see your workload deleted from ${NOVA_WORKLOAD_CLUSTER_1} and recreated in ${NOVA_WORKLOAD_CLUSTER_2}.
You can check that the migration took place as expected:
kubectl --context=${K8S_CLUSTER_CONTEXT_2} get pods -n guestbook
NAME READY STATUS RESTARTS AGE
frontend-8f8bbff7-654vd 1/1 Running 0 3m5s
frontend-8f8bbff7-gsfsb 1/1 Running 0 3m5s
frontend-8f8bbff7-r9jh4 1/1 Running 0 3m5s
redis-follower-5d98d85b7f-mg6lk 1/1 Running 0 3m5s
redis-follower-5d98d85b7f-nb8pc 1/1 Running 0 3m5s
redis-leader-69fcf79d5c-nb9ss 1/1 Running 0 3m5s
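You can also confirm that the workload is gone from the first cluster; once the old pods finish terminating, this should report no resources:
kubectl --context=${K8S_CLUSTER_CONTEXT_1} get pods -n guestbook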
If you are running this tutorial on Kind clusters, you can verify this as follows:
- Get the Kubeconfig of the new target workload cluster:
kind get kubeconfig --name ${NOVA_WORKLOAD_CLUSTER_2} > kubeconfig-wlc-2
- Get all the components of the Guestbook application on this workload cluster:
kubectl --kubeconfig=./kubeconfig-wlc-2 get all -n guestbook
Cleanup
You can delete the Guestbook application as follows:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f examples/sample-policy/guestbook-all-in-one.yaml -n guestbook
namespace "guestbook" deleted
deployment.apps "redis-leader" deleted
service "redis-leader" deleted
deployment.apps "redis-follower" deleted
service "redis-follower" deleted
deployment.apps "frontend" deleted
service "frontend" deleted
Then delete the corresponding SchedulePolicy:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} delete -f ./policy.yaml
schedulepolicy.policy.elotl.co "app-guestbook" deleted
rm -f ./policy.yaml
rm -f ./policy_updated.yaml
Check that the policy was deleted:
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get schedulepolicies
No resources found