
Quickstart Tutorial for Nova

Annotation-based scheduling

Nova currently supports two ways to schedule a workload: annotation-based scheduling and policy-based scheduling.

In annotation-based scheduling, you specify an annotation in the workload manifest. The annotation tells Nova which workload cluster should run the workload.
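
For example, the relevant portion of a Deployment manifest looks like the sketch below. This is a minimal illustration, not a verbatim copy of ./sample-workloads/nginx.yaml (the labels and image are illustrative assumptions); the part Nova reads is the nova.elotl.co/cluster annotation:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      annotations:
        # Tells Nova which workload cluster should run this workload.
        nova.elotl.co/cluster: my-workload-cluster-1
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx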

  1. If you used different names for your clusters, open ./sample-workloads/nginx.yaml and edit the annotation nova.elotl.co/cluster: my-workload-cluster-1, replacing my-workload-cluster-1 with the name of one of your workload clusters.
  2. Run KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl apply -f sample-workloads/nginx.yaml.
  3. Run KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get deployments; it should show that the nginx deployment is up and running.
  4. You should now see two pods running in your workload cluster (see the command sketch after this list).
  5. Note that no pods will be running in the Nova control plane cluster itself: KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get pods should show no pods.
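
To inspect the pods on the workload cluster directly, point kubectl at that cluster's own context (the context name below is a placeholder for whatever your cluster is called):

    kubectl --context=<your-workload-cluster-context> get pods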

Updating/deleting through Nova

You can also modify or delete a workload through Nova, and Nova will automatically update the corresponding objects in the workload cluster. We use the nginx deployment as the example:

  1. Run KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl edit deployment nginx and change replicas from 2 to 3.
  2. In your workload cluster, you should now see 3 nginx pods running.
  3. Run KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get deployments; you should see 3 replicas running.
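
As an aside, the same change can be made without opening an editor, using the stock kubectl scale subcommand (nothing Nova-specific); since Nova propagates updates to the workload cluster, this should have the same effect:

    KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl scale deployment nginx --replicas=3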

Deleting a workload in Nova results in the workload being deleted from the workload cluster too:

  1. Run KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl delete deployment nginx.
  2. The nginx deployment should now be gone from both the Nova control plane and your workload cluster.

Policy-based scheduling

Another way to specify scheduling is through Nova's SchedulePolicy CRD. A SchedulePolicy contains one or more resource selectors and a placement that tells Nova how matching resources should be scheduled. Currently, only static placement is supported, in which the user names the destination workload cluster; dynamic scheduling based on resource availability, cost, or custom metrics is on the roadmap.
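
The policy applied in step 4 below matches objects labeled app: redis or app: guestbook and pins them to my-workload-cluster-1. A rough sketch of such a policy is shown here; the field names are illustrative assumptions rather than the verbatim CRD schema, so consult sample-policy/policy.yaml for the exact fields:

    # Illustrative sketch only; see sample-policy/policy.yaml for the real schema.
    apiVersion: policy.elotl.co/v1alpha1   # assumed API group/version
    kind: SchedulePolicy
    metadata:
      name: app-guestbook
      namespace: guestbook
    spec:
      resourceSelectors:                   # which objects this policy matches
        labelSelectors:
        - matchExpressions:
          - key: app
            operator: In
            values: [redis, guestbook, busybox]
      placement:                           # static placement: the destination cluster
        cluster: my-workload-cluster-1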

  1. kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-1 create namespace guestbook

  2. kubectl --context=gke_elotl-dev_us-central1-c_nova-example-agent-2 create namespace guestbook

  3. KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl create namespace guestbook

  4. KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl apply -f sample-policy/policy.yaml -n guestbook. This policy says: schedule any object labeled app: redis or app: guestbook to cluster my-workload-cluster-1.

  5. KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get schedulepolicies -n guestbook -o go-template-file=kubectl_templates/schedulepolicies.gotemplate. This lets you verify that the policy was created:

    NOVA POLICY NAME     NOVA WORKLOAD CLUSTER     LABEL SELECTOR(s)
    -------------------- ------------------------- --------------------
    app-guestbook        my-workload-cluster-1     app=redis
                                                   app=guestbook
                                                   app=busybox
    -------------------- ------------------------- --------------------
  6. KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl apply -f sample-policy/guestbook-all-in-one.yaml -n guestbook. This schedules the guestbook stateless application into my-workload-cluster-1.

  7. KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl get all -n guestbook. You should be able to see something like the following:

    NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    service/frontend         LoadBalancer   10.96.25.97    35.223.90.60   80:31528/TCP   82s
    service/redis-follower   ClusterIP      10.96.251.47   <none>         6379/TCP       83s
    service/redis-leader     ClusterIP      10.96.27.169   <none>         6379/TCP       83s

    NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/frontend         3/3     3            3           83s
    deployment.apps/redis-follower   2/2     2            2           83s
    deployment.apps/redis-leader     1/1     1            1           83s

The EXTERNAL-IP of the frontend service should lead you to the main page of the guestbook application.

Workload migration

Now suppose my-workload-cluster-1 will undergo maintenance and you want to migrate your guestbook application to my-workload-cluster-2. You can achieve this by editing the SchedulePolicy:

  1. KUBECONFIG=./nova-installer-output/nova-kubeconfig kubectl edit schedulepolicy app-guestbook -n guestbook. Update my-workload-cluster-1 to my-workload-cluster-2 (a sketch of the change follows this list).
  2. You should see your workload deleted from my-workload-cluster-1 and recreated in my-workload-cluster-2.
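
Conceptually, the migration is a one-line change to the policy's placement. Continuing the illustrative schema from the earlier sketch (field names are assumptions, not the verbatim CRD):

    spec:
      placement:
        cluster: my-workload-cluster-2   # was: my-workload-cluster-1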