Quickstart Tutorial
Nova currently supports two ways to schedule a workload: annotation-based scheduling and policy-based scheduling.
In annotation-based scheduling, you specify an annotation in the workload manifest. The annotation tells Nova which workload cluster should run the workload.
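For example, the nginx sample used below carries such an annotation. A minimal sketch of what the manifest looks like (the image tag and labels here are illustrative; the shipped `./sample-workloads/nginx.yaml` is authoritative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  annotations:
    # Nova reads this annotation and places the workload on the named cluster.
    nova.elotl.co/cluster: my-workload-cluster-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # illustrative version
```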
- If you used different names for your clusters, open `./sample-workloads/nginx.yaml` and edit the annotation `nova.elotl.co/cluster: my-workload-cluster-1`, replacing `my-workload-cluster-1` with the name of one of your workload clusters.
- Run `kubectl --context=nova apply -f sample-workloads/nginx.yaml`.
- Run `kubectl --context=nova get deployments`; it should show the nginx deployment is up and running.
- You should now see two pods running in your workload cluster (see the command below for how to check).
- Note that there will be no pod running in the Nova Control Plane cluster: `kubectl --context=nova get pods` should show no pods.
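To check the workload cluster directly, point kubectl at its context. Assuming `my-workload-cluster-1` corresponds to the `nova-example-agent-1` context used later in this tutorial (the mapping depends on your setup):

```
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-1 get pods
```

This should list the two nginx pods.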
Updating/Deleting through Nova
You can also modify or delete a workload through Nova, and Nova will automatically update the corresponding objects in the workload cluster. We'll use the nginx deployment as the example:
- Run `kubectl --context=nova edit deployment nginx` and change the replica count from 2 to 3.
- In your workload cluster, there should now be 3 nginx pods running.
- Run `kubectl --context=nova get deployments`; you should see 3 replicas running.
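If you prefer a non-interactive change over `kubectl edit`, the standard `kubectl scale` command has the same effect:

```
kubectl --context=nova scale deployment nginx --replicas=3
```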
Deleting a workload in Nova results in the workload being deleted from the workload cluster too:
- Run `kubectl --context=nova delete deployment nginx`.
- You should see the nginx deployment deleted both from the Nova Control Plane and from your workload cluster.
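You can confirm the deletion propagated by listing deployments in both places (again assuming the agent-1 context maps to `my-workload-cluster-1`); neither should list nginx anymore:

```
kubectl --context=nova get deployments
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-1 get deployments
```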
Policy-based scheduling
Another way to specify scheduling is through Nova's SchedulePolicy CRD. A schedule policy contains one or more resource selectors, plus a placement that tells Nova how matching resources should be scheduled. Currently, we support static placement, where the user names the destination workload cluster, and dynamic scheduling based on resource availability.
Create the `guestbook` namespace in both workload clusters and in the Nova Control Plane, then apply the sample policy:

```
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-1 create namespace guestbook
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-2 create namespace guestbook
kubectl --context=nova create namespace guestbook
kubectl --context=nova apply -f sample-policy/policy.yaml
```
This policy says: for any object with label `app: redis` or `app: guestbook`, schedule it to cluster `my-workload-cluster-1`.
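The policy in `sample-policy/policy.yaml` looks roughly like the sketch below. The apiVersion and field names here are assumptions based on the behavior described above; treat the shipped file as authoritative:

```yaml
# Sketch of a static-placement SchedulePolicy; field names are illustrative.
apiVersion: policy.elotl.co/v1alpha1   # assumed API group/version
kind: SchedulePolicy
metadata:
  name: app-guestbook
  namespace: guestbook
spec:
  # Match workloads carrying any of these labels.
  resourceSelectors:
    labelSelectors:
      - matchLabels:
          app: redis
      - matchLabels:
          app: guestbook
  # Static placement: pin matching resources to one workload cluster.
  clusterSelector:
    matchLabels:
      kubernetes.io/metadata.name: my-workload-cluster-1
```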
You can verify that the policy was created:

```
kubectl --context=nova get schedulepolicies -n guestbook -o go-template-file=kubectl_templates/schedulepolicies.gotemplate
```

```
NOVA POLICY NAME     NOVA WORKLOAD CLUSTER     LABEL SELECTOR(s)
------------------   -----------------------   -----------------
app-guestbook        my-workload-cluster-1     app=redis
                                               app=guestbook
                                               app=busybox
------------------   -----------------------   -----------------
```

Next, deploy the guestbook application:

```
kubectl --context=nova apply -f sample-policy/guestbook-all-in-one.yaml -n guestbook
```

This schedules the guestbook stateless application into `my-workload-cluster-1`.

Run `kubectl --context=nova get all -n guestbook`. You should see something like the following:

```
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
service/frontend         LoadBalancer   10.96.25.97    35.223.90.60   80:31528/TCP   82s
service/redis-follower   ClusterIP      10.96.251.47   <none>         6379/TCP       83s
service/redis-leader     ClusterIP      10.96.27.169   <none>         6379/TCP       83s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/frontend         3/3     3            3           83s
deployment.apps/redis-follower   2/2     2            2           83s
deployment.apps/redis-leader     1/1     1            1           83s
```
The EXTERNAL-IP of the frontend service should lead you to the main page of the guestbook application.
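To grab that address without scanning the table, you can use a standard jsonpath query:

```
kubectl --context=nova get service frontend -n guestbook -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```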
Workload migration
Now let's say `my-workload-cluster-1` will go through some maintenance and you want to migrate your guestbook application to `my-workload-cluster-2`.
You can achieve this by editing the SchedulePolicy:
- Run `kubectl --context=nova edit schedulepolicy app-guestbook -n guestbook` and update `my-workload-cluster-1` to `my-workload-cluster-2`.
- You should see your workload deleted from `my-workload-cluster-1` and recreated in `my-workload-cluster-2` (see the commands below to watch this happen).
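To watch the migration, list the guestbook pods in both workload clusters, using the agent contexts created earlier (how cluster names map to contexts depends on your setup). Pods should drain from the first cluster and come up in the second:

```
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-1 get pods -n guestbook
kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-2 get pods -n guestbook
```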