Version: v0.7.1

Quickstart Tutorial

Prerequisites

You have the Nova Control Plane installed, with at least two workload clusters connected to it. If you haven't, please follow the installation instructions. In this tutorial we will use a Kubernetes Service of type LoadBalancer to show that you can access the deployed application through an external IP. For this to work, your workload clusters need a load balancer implementation. Most cloud-provider Kubernetes clusters have one configured out of the box; for local clusters, such as kind (Kubernetes in Docker), you may need to install one yourself (e.g. MetalLB).

Nova currently supports two ways to schedule a workload (or a group of workloads): annotation-based scheduling and policy-based scheduling. In this tutorial we assume that your Nova Control Plane kubeconfig context is named nova. To follow along, either replace "--context=nova" in each command with your Nova Control Plane kube context name, or rename your context using the following command:

kubectl config rename-context [yourname] nova
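If you are not sure of the exact context name to use, you can list all contexts in your kubeconfig first; the Nova Control Plane context name depends on how your kubeconfig was set up:

kubectl config get-contexts -o name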

Annotation-based scheduling

In annotation-based scheduling, you specify an annotation in the workload manifest. The annotation tells Nova which workload cluster should run the workload.

First, let's check which workload clusters are connected to Nova and ready.

kubectl --context=nova get clusters
NAME                    K8S-VERSION   K8S-CLUSTER   REGION        ZONE            READY   IDLE   STANDBY
my-workload-cluster-1   1.25                        us-central1   us-central1-a   True    True   False
my-workload-cluster-2   1.25                        us-central1   us-central1-b   True    True   False

We can use my-workload-cluster-1 and my-workload-cluster-2 in the annotation to specify where we want our workload to run. If your workload clusters are named differently, change the cluster name in the annotation accordingly.
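If you want more detail about a cluster before targeting it, kubectl describe works on Nova's Cluster objects just like on any other resource (the exact fields shown depend on your Nova version):

kubectl --context=nova describe cluster my-workload-cluster-1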

Let's save the following manifest to a file:

  cat <<EOF > nginx-annotation-based.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-annotation-based
  labels:
    app: nginx-annotation-based
  annotations:
    nova.elotl.co/cluster: my-workload-cluster-1 # if your workload cluster is named differently, change this value
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-annotation-based
  template:
    metadata:
      labels:
        app: nginx-annotation-based
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

Now, we can create it in the Nova Control Plane:

kubectl --context=nova create -f nginx-annotation-based.yaml

Nova will schedule the nginx app to the workload cluster specified in the nova.elotl.co/cluster annotation and sync the status of the deployment back to the Nova Control Plane. You can watch the deployment replicas become available by running:

kubectl --context=nova get deployments --watch

After some time, you should see the nginx deployment with 2/2 replicas available. Note that there will be no pods running in the Nova Control Plane cluster - kubectl --context=nova get pods should show no pods. This is because Nova by default does not sync "child" workloads (e.g. ReplicaSets for Deployments, Pods for ReplicaSets) to the Nova Control Plane.
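If you want to confirm where the pods actually landed, query the workload cluster directly; replace the placeholder below with your workload cluster 1 kube context:

kubectl --context=<workload-cluster-1-kube-context> get pods -l app=nginx-annotation-based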

Updating / Deleting Workloads

Once a workload is scheduled by Nova, you can modify or delete it in the Nova Control Plane, and the change will be applied in the workload cluster.

Let's try to scale out our nginx-annotation-based deployment in the Nova Control Plane:

kubectl --context=nova scale deployment nginx-annotation-based --current-replicas=2 --replicas=3

Run

kubectl --context=nova get deployments --watch

and after a moment you should see 3 replicas of the nginx-annotation-based deployment running.
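You can also confirm that the scale-out was propagated by reading the replica count directly from the workload cluster (the context name is a placeholder; this is a sanity check, not a required step):

kubectl --context=<workload-cluster-1-kube-context> get deployment nginx-annotation-based -o jsonpath='{.spec.replicas}'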

Deleting a workload in Nova results in the workload being deleted from the workload cluster too:

kubectl --context=nova delete -f nginx-annotation-based.yaml

You should see the nginx deployment deleted both from the Nova Control Plane and from your workload cluster.
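To double-check, both of the following lookups should report NotFound once the deletion has been synced (the second context name is a placeholder for your workload cluster 1 kube context):

kubectl --context=nova get deployment nginx-annotation-based
kubectl --context=<workload-cluster-1-kube-context> get deployment nginx-annotation-based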

Policy-based scheduling

Another way to specify scheduling is through Nova's SchedulePolicy CRD. This lets you be more specific about which workloads should run in which workload clusters. A SchedulePolicy offers a resource label selector to match multiple workloads, a namespace selector to match workloads from more than one namespace, and a cluster selector to pick one or more workload clusters based on their names or other properties. See the SchedulePolicy reference for a full description of the configuration options. To learn more about the high-level scheduling concepts, see the Concepts section, especially Group Scheduling, Policy Based Scheduling and Spread Scheduling.
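Before using it, you can confirm that the SchedulePolicy CRD is available in the Nova Control Plane. The command below only assumes the API group policy.elotl.co, which is the group used by the manifests later in this section:

kubectl --context=nova api-resources --api-group=policy.elotl.co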

First, let's check which workload clusters are connected to Nova and ready.

kubectl --context=nova get clusters
NAME                    K8S-VERSION   K8S-CLUSTER   REGION        ZONE            READY   IDLE   STANDBY
my-workload-cluster-1   1.25                        us-central1   us-central1-a   True    True   False
my-workload-cluster-2   1.25                        us-central1   us-central1-b   True    True   False

The workload cluster names my-workload-cluster-1 and my-workload-cluster-2 can be used in a SchedulePolicy to specify the target cluster. Your cluster names may differ, depending on how you named them during the Nova agent installation.
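Since the cluster selector in a SchedulePolicy matches clusters by their labels, it can be useful to see which labels your Cluster objects carry; the kubernetes.io/metadata.name label used later in this tutorial should appear among them:

kubectl --context=nova get clusters --show-labels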

We will use the following example application, made up of two Redis deployments with their services and a frontend deployment with its service.

  cat <<EOF > guestbook-all-in-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    component: redis
    app: guestbook
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "docker.io/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: guestbook
    role: leader
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    # must match the pod labels of the redis-leader deployment
    component: redis
    role: leader
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: guestbook
    component: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      component: redis-follower
  template:
    metadata:
      labels:
        component: redis-follower
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: gcr.io/google_samples/gb-redis-follower:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-follower
  labels:
    app: guestbook
    component: redis-follower
    role: follower
    tier: backend
spec:
  ports:
  # the port that this service should serve on
  - port: 6379
  selector:
    component: redis-follower
    role: follower
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
    component: frontend
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      component: frontend
      tier: frontend
  template:
    metadata:
      labels:
        component: frontend
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v5
        env:
        - name: GET_HOSTS_FROM
          value: "dns"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    component: frontend
    tier: frontend
spec:
  # type LoadBalancer automatically creates an external load-balanced IP for
  # the frontend service, provided your cluster has a load balancer implementation
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    component: frontend
    tier: frontend
EOF
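Before moving on, note that every object in this file carries the label app=guestbook; a quick grep is an easy way to confirm that (purely a convenience check):

grep -n "app: guestbook" guestbook-all-in-one.yaml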

We will use this common app=guestbook label in our SchedulePolicy manifest:

  cat <<EOF > schedule-policy.yaml
apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: app-guestbook
spec:
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: default # we will create guestbook workloads in the default namespace
  clusterSelector:
    matchLabels:
      kubernetes.io/metadata.name: my-workload-cluster-1 # if your workload cluster is named differently, change this value
  groupBy:
    labelKey: app # this ensures that all workloads will be moved together between workload clusters
  resourceSelectors:
    labelSelectors:
    - matchLabels:
        app: guestbook # this label key and value is present on each object in the guestbook-all-in-one.yaml manifest
EOF

Now, let's create the SchedulePolicy in the Nova Control Plane:

kubectl --context=nova create -f schedule-policy.yaml
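You can verify that the policy was accepted by listing it back. The plural resource name below is an assumption; if it differs in your Nova version, kubectl api-resources --api-group=policy.elotl.co shows the exact name:

kubectl --context=nova get schedulepolicies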

The next step is creating the guestbook app in the Nova Control Plane:

kubectl --context=nova create -f guestbook-all-in-one.yaml

Let's wait a bit until all deployments are ready and available:

kubectl --context=nova get deployments --watch

Once everything is up and running, you should see the following output:

kubectl --context=nova get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
frontend         3/3     3            3           46s
redis-follower   2/2     2            2           46s
redis-leader     1/1     1            1           46s
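At this point the workloads should be running in my-workload-cluster-1, the cluster selected by the policy. You can confirm this from the workload cluster side; the context name below is a placeholder, and the label selector relies on the shared app=guestbook label:

kubectl --context=<workload-cluster-1-kube-context> get deployments -l app=guestbook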

Shortly after, an external IP for the frontend service will become available:

kubectl --context=nova get service frontend
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
frontend   LoadBalancer   10.96.214.222   172.18.255.201   80:31018/TCP   76s

Hint: if the EXTERNAL-IP field shows <pending> for a long time, it may mean that you don't have a load balancer implementation in the cluster. Please refer to the Prerequisites section of this tutorial.

Opening the external IP of the frontend service in a browser should take you to the main page of the guestbook application.
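If you prefer the command line, a quick curl against the external IP should return the guestbook HTML; substitute the EXTERNAL-IP value from the previous output:

curl -s http://<EXTERNAL-IP>/ | head -n 5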

Workload migration

Now let's say my-workload-cluster-1 will go through maintenance and you want to migrate your guestbook application to my-workload-cluster-2. You can achieve this by editing the SchedulePolicy:

  cat <<EOF > schedule-policy-updated.yaml
apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: app-guestbook
spec:
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: default # we will create guestbook workloads in the default namespace
  clusterSelector:
    matchLabels:
      kubernetes.io/metadata.name: my-workload-cluster-2 # we change it from my-workload-cluster-1
  groupBy:
    labelKey: app # this ensures that all workloads will be moved together between workload clusters
  resourceSelectors:
    labelSelectors:
    - matchLabels:
        app: guestbook # this label key and value is present on each object in the guestbook-all-in-one.yaml manifest
EOF

and applying the updated version to the Nova Control Plane:

kubectl --context=nova apply -f schedule-policy-updated.yaml
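Alternatively, instead of maintaining a second manifest file, you could patch the existing policy in place. This is only a sketch: it assumes the resource is registered under the schedulepolicy name and that a merge patch of spec.clusterSelector behaves as expected in your Nova version:

kubectl --context=nova patch schedulepolicy app-guestbook --type=merge \
  -p '{"spec":{"clusterSelector":{"matchLabels":{"kubernetes.io/metadata.name":"my-workload-cluster-2"}}}}'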

After a few moments, you should be able to see your workloads deleted from my-workload-cluster-1 and recreated in my-workload-cluster-2. Watching the migration of the workloads looks like this from the Nova Control Plane perspective:

kubectl --context=nova get deployments --watch
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
frontend         0/3     3            0           2m31s
redis-follower   1/2     2            1           2m31s
redis-leader     1/1     1            1           2m31s
redis-follower   2/2     2            2           2m32s
frontend         1/3     3            1           2m52s
frontend         2/3     3            2           2m54s
frontend         3/3     3            3           2m54s

To verify that pods migrated successfully, run:

kubectl --context=[workload-cluster-1-kube-context or workload-cluster-2-kube-context] get pods

Pods created for the frontend, redis-leader and redis-follower deployments should be present only in the second kube context (your workload cluster 2 kube context).
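If you prefer checking the synced Deployment objects rather than individual pods, the shared app=guestbook label makes that easy (placeholder contexts again; replace them with your own). The first command should eventually report that no resources were found, while the second lists frontend, redis-follower and redis-leader:

kubectl --context=<workload-cluster-1-kube-context> get deployments -l app=guestbook
kubectl --context=<workload-cluster-2-kube-context> get deployments -l app=guestbook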

Cleaning up

To delete all the resources used in this tutorial, please delete your workloads first:

kubectl --context=nova delete -f guestbook-all-in-one.yaml

and wait a bit; Nova will remove them from your workload cluster.
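Before deleting the policy, you can confirm the workloads are really gone from the workload cluster; the context name is a placeholder for your workload cluster 2 kube context (where the app was last running), and the command should eventually report that no resources were found:

kubectl --context=<workload-cluster-2-kube-context> get deployments -l app=guestbook

Once they are deleted from the workload cluster, you can safely delete the SchedulePolicy: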

kubectl --context=nova delete -f schedule-policy-updated.yaml