Smart Scheduling (Beta)
Overview
Nova currently supports three ways to schedule a workload: annotation-based scheduling, policy-based scheduling, and smart scheduling based on resource availability.
Smart Scheduling Based on Resource Availability: Testing Example
Nova also supports smart group scheduling: scheduling a group of k8s objects to any cluster that has enough resources to host the whole group.
In this exercise we will observe how Nova groups k8s objects into a ScheduleGroup and finds a workload cluster for the whole group.
Let's say you have a group of microservices which combine into an application. You have a few workload clusters connected, and you want to run this app in any cluster that has enough vCPU and memory, as long as all microservices run in the same cluster.
For this use case, you can define a SchedulePolicy with .spec.groupScheduling: true and no direct cluster assignment (.spec.assignment.SingleCluster: "").
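For orientation, here is a minimal sketch of what such a policy can look like. The apiVersion is taken from the ScheduleGroup output shown later in this guide, and the resourceSelectors field name is an assumption for illustration; the actual manifest used below is sample-group-scheduling/policy.yaml.

apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: demo-policy
spec:
  groupScheduling: true            # schedule all matching objects together as one ScheduleGroup
  # no .spec.assignment.SingleCluster set, so Nova picks any cluster with enough capacity
  resourceSelectors:               # assumed field name; check policy.yaml for the exact schema
    labelSelectors:
      - matchLabels:
          microServicesDemo: "yes" # the label our demo objects will carry (see below)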
We will use the GCP Microservices Demo app, which includes 10 different microservices.
The total resources requested by this app are 1570 millicores of CPU and 1368 Mi of memory.
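If you'd like to double-check those totals, here is a rough sketch of one way to sum the CPU requests from the manifest (assuming yq v4 and jq are installed, and that every container sets its CPU request in millicores; memory can be summed analogously):

yq -o=json 'select(.kind == "Deployment")' sample-group-scheduling/blue-app.yaml \
  | jq -s '[.[].spec.template.spec.containers[].resources.requests.cpu
            | rtrimstr("m") | tonumber] | add'   # prints the total in millicores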
Let's start by creating the namespace that we will use, in both workload clusters and in the Nova Control Plane:
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-1 create namespace microsvc-demo
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-2 create namespace microsvc-demo
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl create namespace microsvc-demo
Then, apply the schedule policy in the Nova Control Plane:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl apply -f sample-group-scheduling/policy.yaml -n microsvc-demo
This policy says: for any objects with the label microServicesDemo: "yes", group them and schedule the whole group to any cluster which has enough resources to run them. If the schedule policy was created successfully, you should see that the first schedule group was created automatically and is waiting for matching objects:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get schedulegroups -n microsvc-demo -o go-template-file=kubectl_templates/schedulegroups.gotemplate
NAME                NOVA WORKLOAD CLUSTER                   NOVA POLICY NAME
------------------  --------------------------------------  --------------------------------------
demo-policy-1       Not assigned yet.                       demo-policy
------------------  --------------------------------------  --------------------------------------

Let's create the demo blue app:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl apply -f sample-group-scheduling/blue-app.yaml -n microsvc-demo
Every k8s object in this file has a label matching the one we defined earlier in the schedule policy: microServicesDemo: "yes".
Let's verify that the created objects were matched with the schedule policy. When we describe any kind (e.g. Service) with this label, we should see a SchedulePolicyMatched event:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl describe svc -n microsvc-demo -l microServicesDemo=yes
...
Name:               shippingservice
Namespace:          microsvc-demo
Labels:             instance=one
                    microServicesDemo=yes
Annotations:        <none>
Selector:           app=shippingservice
Type:               ClusterIP
IP Family Policy:   SingleStack
IP Families:        IPv4
IP:                 10.96.115.157
IPs:                10.96.115.157
Port:               grpc  50051/TCP
TargetPort:         50051/TCP
Endpoints:          <none>
Session Affinity:   None
Events:
  Type    Reason                 Age                    From            Message
  ----    ------                 ----                   ----            -------
  Normal  SchedulePolicyMatched  4m52s (x724 over 34m)  nova-scheduler  schedule policy demo-policy will be used to determine target cluster

Now, let's see if all microservices were added to the ScheduleGroup:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl describe schedulegroup -n microsvc-demo
Name:                demo-policy-1
Namespace:           microsvc-demo
Labels:              nova.elotl.co/matching-policy=demo-policy
                     nova.elotl.co/target-cluster=my-workload-cluster-1
Annotations:         <none>
API Version:         policy.elotl.co/v1alpha1
Kind:                ScheduleGroup
Last Status Update:  2022-10-17T12:37:45Z
Object Refs:
  Group:      apps
  Kind:       Deployment
  Name:       emailservice
  Namespace:  microsvc-demo
  Version:    v1
  ...(...)...
  Group:
  Kind:       Service
  Name:       adservice
  Namespace:  microsvc-demo
  Version:    v1
Policy:
  Name:       demo-policy
  Namespace:  microsvc-demo
Scheduled:    true
Events:       <none>

As you can see, the group is already scheduled (check the Scheduled field) and the blue app is running in my-workload-cluster-1 (check the nova.elotl.co/target-cluster label). Now, let's list the schedule groups again:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get schedulegroups -n microsvc-demo -o go-template-file=kubectl_templates/schedulegroups.gotemplate
NAME                NOVA WORKLOAD CLUSTER                   NOVA POLICY NAME
------------------  --------------------------------------  --------------------------------------
demo-policy-1       my-workload-cluster-1                   demo-policy
demo-policy-2       Not assigned yet.                       demo-policy
------------------  --------------------------------------  --------------------------------------

As you can see, the first schedule group (containing the blue app) was scheduled to my-workload-cluster-1. The second schedule group (demo-policy-2) was created immediately after the first group was scheduled successfully. When you add a new schedule policy, the first group is created automatically, and any matching objects are added to it. Each new object matching the schedule policy selector is added as a reference to the schedule group (so the group gets updated). If there are no updates (no new matching objects) for 30 seconds, nova-scheduler tries to find a cluster which has enough resources to run all microservices in the group. You should be able to see the objects running in the cluster specified in the output of the describe ScheduleGroup command.
Now, we will create the green app with labels matching the same schedule policy. It should be added to the second schedule group:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl apply -f sample-group-scheduling/green-app.yaml -n microsvc-demo
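While waiting, you can also watch the schedule groups update live (-w is the standard kubectl watch flag):

KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get schedulegroups -n microsvc-demo -w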
Let's wait about 30 seconds and get the schedule groups to see in which workload cluster the green app landed:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get schedulegroups -n microsvc-demo -o go-template-file=kubectl_templates/schedulegroups.gotemplate
NAME                NOVA WORKLOAD CLUSTER                   NOVA POLICY NAME
------------------  --------------------------------------  --------------------------------------
demo-policy-1       my-workload-cluster-1                   demo-policy
demo-policy-2       my-workload-cluster-2                   demo-policy
demo-policy-3       Not assigned yet.                       demo-policy
------------------  --------------------------------------  --------------------------------------

Looks like schedule group demo-policy-2 was scheduled to my-workload-cluster-2. NOTE: depending on the resources in your workload clusters, a schedule group may be scheduled to any workload cluster with enough resources.
Now, imagine you need to increase the resource requests or the replica count of one of the microservices in the second app, but in the meantime there was other activity in the cluster, so after your update there are no longer enough resources in that cluster to satisfy it. You can simulate this scenario using the sample-group-scheduling/hog-pod.yaml manifest. You should edit it so that the hog pod takes up almost all resources in your cluster, then apply it to the same cluster where the demo-policy-2 schedule group was scheduled (my-workload-cluster-2 in this case):
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-2 apply -f sample-group-scheduling/hog-pod.yaml
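For reference, a hog pod is nothing more than a pod with very large resource requests. A minimal sketch (the pod name, image, and request sizes below are placeholders rather than the contents of the actual manifest; size the requests to nearly fill the cluster's allocatable capacity):

apiVersion: v1
kind: Pod
metadata:
  name: hog-pod
spec:
  containers:
    - name: hog
      image: registry.k8s.io/pause:3.9   # placeholder; the container only needs to hold the reservation
      resources:
        requests:
          cpu: "3500m"      # adjust to roughly all allocatable CPU in the target cluster
          memory: "12Gi"    # adjust to roughly all allocatable memory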
Now let's increase the replica count of the frontend-2 microservice (one of the microservices in the green app) in the Nova Control Plane:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl scale deploy/frontend-2 --replicas=5 -n microsvc-demo
If another cluster has enough resources to satisfy the new schedule group requirements (the existing resource requests of the other 9 microservices plus the increased replica count of frontend-2), listing the schedule groups will show the group being rescheduled to that cluster:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get schedulegroups -n microsvc-demo -o go-template-file=kubectl_templates/schedulegroups.gotemplate
NAME                NOVA WORKLOAD CLUSTER                   NOVA POLICY NAME
------------------  --------------------------------------  --------------------------------------
demo-policy-1       my-workload-cluster-1                   demo-policy
demo-policy-2       my-workload-cluster-1                   demo-policy
demo-policy-3       Not assigned yet.                       demo-policy
------------------  --------------------------------------  --------------------------------------

You can verify that the green app is running by listing its deployments in the Nova Control Plane:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get deployments -n microsvc-demo -l instance=green
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
adservice-2               1/1     1            1           2m1s
cartservice-2             1/1     1            1           2m2s
checkoutservice-2         1/1     1            1           2m3s
currencyservice-2         1/1     1            1           2m2s
emailservice-2            1/1     1            1           2m3s
frontend-2                5/5     5            5           2m3s
loadgenerator-2           1/1     1            1           2m2s
paymentservice-2          1/1     1            1           2m3s
productcatalogservice-2   1/1     1            1           2m2s
recommendationservice-2   1/1     1            1           2m3s
redis-cart-2              1/1     1            1           2m1s
shippingservice-2         1/1     1            1           2m2s

To remove all objects created for this demo, delete the microsvc-demo namespace:
KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl delete ns microsvc-demo
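Since we created the microsvc-demo namespace directly in each workload cluster at the start, you may want to remove it there as well (assuming Nova does not clean up namespaces it did not create):

kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-1 delete ns microsvc-demo
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-2 delete ns microsvc-demo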