
Installation (Advanced)

Overview

Purpose

This guide provides step-by-step instructions for installing Nova, a control plane and agent system designed to manage multiple Kubernetes clusters. By following this guide, you will set up the Nova Control Plane on a hosting Kubernetes cluster and deploy Nova Agents to workload clusters.

Scope

This guide covers:

  • Prerequisites: Requirements before installing Nova.
  • Installing novactl: How to download and set up the Nova CLI.
  • Deploying Nova: Instructions for deploying the Nova Control Plane and Agents.
  • Post-Installation Checks: Verifying the installation.
  • Uninstalling Nova: Steps to remove Nova if needed.

Key Concepts

  • Nova Control Plane: The central management unit running on a hosting Kubernetes cluster.
  • Nova Agent: The component deployed to each workload cluster for management.
  • novactl: The command-line interface (CLI) for installing, uninstalling and checking the status of a Nova deployment.
  • Workload Cluster: A Kubernetes cluster managed by the Nova Control Plane.
  • Hosting Cluster: A Kubernetes cluster where the Nova Control Plane runs.

Prerequisites

  1. At least 2 Kubernetes clusters up and running. One cluster will be the hosting cluster where the Nova Control Plane runs. The other clusters will be workload clusters managed by the Nova Control Plane.
  2. kubectl installed and configured.
  3. Nova cannot be deployed to an Autopilot GKE cluster. Please validate that you are deploying to a non-Autopilot cluster.
  4. The cluster hosting the Nova Control Plane MUST have a storage provisioner and a default StorageClass configured. The Nova Control Plane uses etcd as a backing store, which runs as a StatefulSet and requires a PersistentVolume to work. (A quick check is shown after this list.)
  5. The cluster hosting the Nova Control Plane MUST have an ingress controller configured. The Nova API Server is exposed as a LoadBalancer type Kubernetes Service and needs an IP address or domain that is reachable from the Nova Agents in the workload clusters, as well as from human users interacting with the Nova Control Plane.
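
A quick way to sanity-check prerequisite 4 on the hosting cluster (a minimal sketch; replace the context name with your own):

# List StorageClasses; exactly one should be marked "(default)"
kubectl --context=<hosting-cluster-context> get storageclass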

Kubernetes compatibility

Nova Version    Kubernetes Versions Supported
v0.10           v1.29, v1.30, v1.31
v0.9            v1.26, v1.27, v1.28, v1.29
v0.8            v1.26, v1.27, v1.28, v1.29
v0.7            v1.25, v1.26, v1.27, v1.28
v0.6            v1.24, v1.25

Installation steps

  1. Download and install novactl
  2. Prepare hosting cluster
  3. Create API Server load balancer service.
  4. Create certificates for Nova Control Plane
  5. Install Control Plane Components
  6. Get Nova Control Plane kubeconfig
  7. Install Nova CRDs
  8. Verify your installation
  9. Connect workload clusters

Download novactl

novactl is our CLI that allows you to easily create new Nova Control Planes, register new Nova Workload Clusters, check the health of your Nova cluster, and more!

If you don't have the release tarball, run the following to download the latest novactl release for your OS and architecture:

curl -s https://api.github.com/repos/elotl/novactl/releases/latest | \
jq -r '.assets[].browser_download_url' | \
grep "$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m | sed 's/x86_64/amd64/;s/i386/386/;s/aarch64/arm64/')" | \
xargs -I {} curl -L {} -o novactl

Install novactl

Make the binary executable

Once you have the binary, run:

chmod +x novactl*

Place the binary in your PATH

The following is an example of installing the binary in /usr/local/bin for Unix-like operating systems:

sudo mv novactl* /usr/local/bin/novactl

If you accidentally downloaded more than one novactl binary, please move only the binary that corresponds to the OS and ARCH of your machine to the /usr/local/bin location.
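
Optionally, you can sanity-check the binary before setting up the kubectl plugin (assuming novactl accepts the same --version flag used in the plugin test below):

novactl --version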

Install it as kubectl plugin

novactl is ready to work as a kubectl plugin. Our docs assume you're using novactl as a kubectl plugin. To make this work, simply run the following (note that you may need to include the full path to your novactl binary):

sudo novactl kubectl-install

And test if it works:

kubectl nova --version
kubectl-nova version v0.9.0 (git: 58407116) built: 20240312092623

Upgrading novactl

If you want to upgrade novactl to the latest version, simply run all previous steps, starting from Download novactl up to this point, again. This will automatically download the latest version and replace your local binary with it.

Preparing hosting cluster

The cluster hosting the Nova Control Plane MUST have a storage provisioner and a default StorageClass configured. The Nova Control Plane uses etcd as a backing store, which runs as a StatefulSet and requires a PersistentVolume to work. The cluster hosting the Nova Control Plane MUST also have an ingress controller configured. The Nova API Server is exposed as a LoadBalancer type Kubernetes Service and needs an IP address or domain that is reachable from the Nova Agents in the workload clusters, as well as from human users interacting with the Nova Control Plane.

Creating API Server

By default, we will install everything in the elotl namespace; you can modify it if you need to. Please export the namespace and the name of the hosting cluster kube context.

export NOVA_NAMESPACE=elotl
export K8S_HOSTING_CLUSTER_CONTEXT=k8s-cluster-hosting-cp
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} create namespace ${NOVA_NAMESPACE}
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} create -f install/base/control-plane/apiserver.yaml -n ${NOVA_NAMESPACE}

Then, we need to wait for an external IP or hostname (depending on your cloud provider) to be allocated. If it's still Pending after a few minutes, you should check whether your cluster has a working Ingress controller, as mentioned in the hosting cluster Prerequisites.

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} wait -n ${NOVA_NAMESPACE} service/apiserver --for=jsonpath='{.status.loadBalancer.ingress[0].ip}' --timeout=360s

If this command fails, it's very likely that the API Server was allocated a hostname (instead of an external IP). You can check this using the following command:

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} wait -n ${NOVA_NAMESPACE} service/apiserver --for=jsonpath='{.status.loadBalancer.ingress[0].hostname}' --timeout=360s
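
You can also inspect the Service status directly to see whether an IP or a hostname was assigned:

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} get -n ${NOVA_NAMESPACE} service/apiserver -o jsonpath='{.status.loadBalancer.ingress[0]}'
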
Configure domain name record for API Server service

We strongly recommend configuring a domain name record or at least external static IP address for this Service.

Store this manifest in a version control system (e.g. git repository)

We strongly recommend storing applied manifest in version control, as it will make upgrades easier.

This API Server service will be an entrypoint to the Nova Control Plane, for workload clusters, as well as human users.

Understand the impact before deleting this Service

Make sure that you know what you are doing before deleting this Service. Without API Server being exposed to the workload clusters and users, Nova Control Plane is not usable. You should delete it only if you don't intend to use Nova anymore.

Generating certificates for Nova Control Plane

Nova Control Plane runs components similar to those of a regular Kubernetes cluster, e.g. apiserver, kube-controller-manager, nova-scheduler and a key-value store. To secure communications between components, Nova Control Plane needs to create a set of certificates, similar to the kubeadm init certs phase. These certificates are then mounted from Kubernetes Secrets into the control plane components.

We provide a kubectl nova subcommand to generate the certificates and the Secret manifests in the correct format. We will provide instructions on how to do this using kubeadm in the future.

For now, you can use the install certs subcommand. The command needs the API Server IP as an input and the namespace where the API Server is installed (default is elotl).

apiserver_ip=$(kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} get -n ${NOVA_NAMESPACE} service/apiserver -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
kubectl nova --context=${K8S_HOSTING_CLUSTER_CONTEXT} install certs --apiserver-public-endpoint="${apiserver_ip}" --namespace=${NOVA_NAMESPACE} --nova-node-ip="${apiserver_ip}" > "${PWD}/nova_certificates.yaml"

If this command fails, it's likely that your API Server Service was exposed with a hostname, not an external IP. In that case, use these commands instead:

apiserver_ip=$(kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} get -n ${NOVA_NAMESPACE} service/apiserver -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
test -n "${apiserver_ip}"
kubectl nova --context=${K8S_HOSTING_CLUSTER_CONTEXT} install certs --apiserver-public-endpoint="${apiserver_ip}" --namespace=${NOVA_NAMESPACE} --nova-node-ip="${apiserver_ip}" > "${PWD}/nova_certificates.yaml"
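
Optionally, you can preview which Secrets the generated manifest defines without creating anything, using a client-side dry run:

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} create --dry-run=client -f "${PWD}/nova_certificates.yaml" -o name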

Then, we can create those Secrets in the hosting cluster:

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} create -f "${PWD}/nova_certificates.yaml"

By default, these certificates expire 10 years from generation. They can be rotated by re-generating the Secrets, applying them to the hosting cluster, and restarting the Nova Control Plane components.
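
For example, a rotation could look like the following sketch, re-using the certificate generation command above and restarting pods by the component labels used later in this guide (the exact restart strategy may vary):

kubectl nova --context=${K8S_HOSTING_CLUSTER_CONTEXT} install certs --apiserver-public-endpoint="${apiserver_ip}" --namespace=${NOVA_NAMESPACE} --nova-node-ip="${apiserver_ip}" > "${PWD}/nova_certificates.yaml"
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} apply -f "${PWD}/nova_certificates.yaml"
# Restart the control plane pods so they pick up the new certificates
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} delete pod -n ${NOVA_NAMESPACE} -l app=etcd
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} delete pod -n ${NOVA_NAMESPACE} -l component=apiserver
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} delete pod -n ${NOVA_NAMESPACE} -l component=controller-manager
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} delete pod -n ${NOVA_NAMESPACE} -l component=nova-scheduler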

Install Control Plane Components

Open install/base/control-plane/kustomization.yaml in a text editor and set namespace: to the namespace you chose and exported as $NOVA_NAMESPACE. You can also do it using sed:

sed "s/namespace: elotl/namespace: ${NOVA_NAMESPACE}/" install/base/control-plane/kustomization.yaml > temp_file && mv temp_file install/base/control-plane/kustomization.yaml
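
If you want to double-check the substitution before applying, you can render the kustomization and confirm the namespace (optional):

kubectl kustomize install/base/control-plane/ | grep "namespace: ${NOVA_NAMESPACE}" | head -n 3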

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} create -k install/base/control-plane/
Store this manifest in a version control system (e.g. git repository)

We strongly recommend storing applied manifest in version control, as it will make upgrades easier.

It might take a while, but eventually apiserver, kube-controller-manager, nova-scheduler and etcd should become ready and available:

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} wait pod --for=jsonpath='{.status.phase}'=Running -n ${NOVA_NAMESPACE} -l app=etcd --timeout=360s
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} wait pod --for=jsonpath='{.status.phase}'=Running -n ${NOVA_NAMESPACE} -l component=apiserver --timeout=180s
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} wait pod --for=jsonpath='{.status.phase}'=Running -n ${NOVA_NAMESPACE} -l component=controller-manager --timeout=180s
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} wait pod --for=jsonpath='{.status.phase}'=Running -n ${NOVA_NAMESPACE} -l component=nova-scheduler --timeout=180s

If all these conditions are met, we can proceed with Control Plane configuration.

Troubleshooting error: timed out waiting for the condition on pods/etcd-0

etcd pods will not start if a storage provisioner is not configured for the hosting cluster (as mentioned in the Preparing hosting cluster section above). You can also take a look at the troubleshooting section for this issue.

Note that if you would like to enable Nova JIT, you need to set the following env variables on the nova-scheduler deployment:

  • NOVA_IDLE_ENTER_STANDBY_ENABLE, along with other env variables for JIT options as needed (NOVA_IDLE_ENTER_STANDBY_SECS, NOVA_DELETE_CLUSTER_IN_STANDBY, NOVA_CREATE_CLUSTER_IF_NEEDED, NOVA_MAX_CREATED_CLUSTERS).
  • Credentials for JIT operations on your target cloud (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY or GCE_ACCESS_KEY, GCE_PROJECT_ID or AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID).
  • If you are using delete/recreate standby mode, AGENT_IMAGE_REPO and AGENT_IMAGE_TAG for the version of Nova you are using.

You can set the needed env variables by modifying the nova-scheduler deployment in install/base/control-plane/nova_cp.yaml prior to its installation, or by kubectl-editing the nova-scheduler deployment after its installation.
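
As an illustration only, the relevant env entries on the nova-scheduler container could look like the sketch below, assuming AWS as the target cloud; the values and the Secret name jit-cloud-credentials are hypothetical, and the exact layout of nova_cp.yaml may differ:

        env:
        - name: NOVA_IDLE_ENTER_STANDBY_ENABLE
          value: "true"
        - name: NOVA_IDLE_ENTER_STANDBY_SECS
          value: "600"                      # hypothetical idle timeout in seconds
        - name: AWS_ACCESS_KEY_ID           # use GCE_*/AZURE_* variables for other clouds
          valueFrom:
            secretKeyRef:
              name: jit-cloud-credentials   # hypothetical Secret holding cloud credentials
              key: aws-access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: jit-cloud-credentials
              key: aws-secret-access-key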

Get Nova KubeConfig

To get the Nova kubeconfig, we need to be sure that the Nova API Server endpoint is responding. The Nova Control Plane will create a "NovaAPIServerEndpointReady" event once it's ready.

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} wait -n ${NOVA_NAMESPACE} '--for=jsonpath={.reason}'=NovaAPIServerEndpointReady event/NovaAPIServerEndpointReady --timeout=180s

We will use novactl as a kubectl plugin to generate the kubeconfig.

kubectl nova --context=${K8S_HOSTING_CLUSTER_CONTEXT} get kubeconfig -n ${NOVA_NAMESPACE} > "${PWD}/nova_kubeconfig.yaml"
cat "${PWD}/nova_kubeconfig.yaml"

You can examine the contents of the file. It should look similar to:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: "LS0tLS1...LS0K"
    server: "https://172...200"
  name: nova
contexts:
- context:
    cluster: nova
    user: nova-admin
  name: nova
current-context: nova
kind: Config
preferences: {}
users:
- name: nova-admin
  user:
    client-certificate-data: "LS0tLS1CRUd...LS0tLQo="
    client-key-data: "LS0tLS1CRUdJ...tLS0tLQo="

Install Nova CRDs

Once we can talk to the Nova API Server, we can install the Nova CRDs. These are needed for Nova to function properly. Note that the following assumes the KUBECONFIG environment variable is currently set to the path of your kubectl config file (e.g., ~/.kube/config); if it is unset, please set KUBECONFIG accordingly before running these commands.

export KUBECONFIG=$KUBECONFIG:${PWD}/nova_kubeconfig.yaml
kubectl config get-contexts
export NOVA_CONTROLPLANE_CONTEXT=nova
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} create -f install/base/crds/nova_crds.yaml
Store this manifest in a version control system (e.g. git repository)

We strongly recommend storing applied manifest in version control, as it will make upgrades easier.

The Nova Control Plane component will run a set of checks at startup, and once they all pass, it will create a NovaControlPlaneReady event. You can use kubectl to wait for this event:

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} wait -n ${NOVA_NAMESPACE} '--for=jsonpath={.reason}'=NovaControlPlaneReady event/NovaControlPlaneReady --timeout=180s

At this point, we have Nova Control Plane up and running. To verify it, we can check the status of the Nova Control Plane:

kubectl nova --context=${NOVA_CONTROLPLANE_CONTEXT} --hosting-cluster-context=${K8S_HOSTING_CLUSTER_CONTEXT} --hosting-cluster-nova-namespace=${NOVA_NAMESPACE} status
Checking status of Nova Control Plane Components

* API Server status... Running √
* Kube-controller-manager status... Running √
* ETCD status... Running √
* Nova scheduler status... Running √
Nova Control Plane is healthy √

Checking presence of Nova Custom Resource Definitions

* Cluster CRD presence... installed √
0 workload clusters connected ‼
please connect at least one Cluster, otherwise Nova does not have a target cluster to run your workloads. Connecting clusters can be done by running novactl install agent <cluster-name> in correct Kube context.
* SchedulePolicy CRD presence... installed √
* 0 SchedulePolicies defined ‼
please create at least one SchedulePolicy, otherwise Nova does not know where to run your workloads. SchedulePolicy spec: https://docs.elotl.co/nova/intro
* ScheduleGroup CRD presence... installed √
All Nova Custom Resource Definitions installed √

To schedule any workloads via Nova we need to connect workload clusters. Let's do it.

Connect workload clusters

Install nova agent into workload cluster

Each workload cluster needs a Nova agent. The Nova agent will be deployed by default to the elotl namespace. Before deploying the Nova agent, you need to ensure that Nova's init-kubeconfig is present in the elotl namespace. Nova's init-kubeconfig provides a kubeconfig for the Nova Control Plane. This kubeconfig is used by the Nova agent in the workload cluster to connect and register itself as a workload cluster in the Nova Control Plane.

export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export K8S_CLUSTER_CONTEXT_1=k8s-cluster-1
export K8S_CLUSTER_CONTEXT_2=k8s-cluster-2

Let's create the namespace first:

kubectl --context=${K8S_CLUSTER_CONTEXT_1} create namespace ${NOVA_NAMESPACE}

and copy the init-kubeconfig from the Nova Control Plane to the workload cluster:

kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get secret -n ${NOVA_NAMESPACE} nova-cluster-init-kubeconfig -o yaml | kubectl --context=${K8S_CLUSTER_CONTEXT_1} apply -f -

To connect a workload cluster to Nova, we will use kubectl apply with a kustomize overlay. By default, we ship overlays for two workload clusters, named wlc-1 and wlc-2, in install/overlays/. To name your workload cluster differently, you can modify the contents of install/overlays/workload-cluster-1/nova_agent.yaml or install/overlays/workload-cluster-2/nova_agent.yaml (or copy the entire directory and rename it). In nova_agent.yaml, you need to change the --cluster-name=wlc-1 (or --cluster-name=wlc-2) line, replacing wlc-1/wlc-2 with your desired workload cluster name.
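
For example, a minimal sketch of copying the shipped overlay and renaming the cluster to a hypothetical my-wlc (both the directory name and the cluster name below are placeholders):

cp -r install/overlays/workload-cluster-1 install/overlays/my-wlc
sed "s/--cluster-name=wlc-1/--cluster-name=my-wlc/" install/overlays/my-wlc/nova_agent.yaml > temp_file && mv temp_file install/overlays/my-wlc/nova_agent.yaml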

Open install/base/agent/kustomization.yaml in text editor and set namespace: to the namespace you chose and exported as $NOVA_NAMESPACE. You can also do it using sed:

sed "s/namespace: elotl/namespace: ${NOVA_NAMESPACE}/" install/base/agent/kustomization.yaml > temp_file && mv temp_file install/base/agent/kustomization.yaml

Next step is creating the agent in the workload cluster context:

kubectl --context=${K8S_CLUSTER_CONTEXT_1} apply -k install/overlays/workload-cluster-1
Store this manifest in a version control system (e.g. git repository)

We strongly recommend storing applied manifest in version control, as it will make upgrades easier.

Now let's check if that worked! Simply run:

kubectl get --context=${NOVA_CONTROLPLANE_CONTEXT} clusters

Remember to update the path to your Nova Control Plane kubeconfig.

NAME    K8S-VERSION   K8S-CLUSTER   REGION   ZONE   READY   IDLE   STANDBY
wlc-1   1.28          wlc-1                         True    True   False

What if I don't see my workload cluster listed?

If the agent install finished without issues and your cluster is not showing up in the Nova Control Plane, something went wrong during the agent registration process. Run the following command to get the agent logs:

kubectl logs --context ${K8S_CLUSTER_CONTEXT_1} -n ${NOVA_NAMESPACE} deployment/nova-agent
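
You can also confirm the agent pod itself is up in the workload cluster:

kubectl --context=${K8S_CLUSTER_CONTEXT_1} get pods -n ${NOVA_NAMESPACE}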

And start debugging from there!

Install other workload clusters

If you have a second cluster, run the same commands with a different cluster and cluster name, e.g.,

kubectl --context=${K8S_CLUSTER_CONTEXT_2} create namespace ${NOVA_NAMESPACE}
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get secret -n ${NOVA_NAMESPACE} nova-cluster-init-kubeconfig -o yaml | kubectl --context=${K8S_CLUSTER_CONTEXT_2} apply -f -
kubectl --context=${K8S_CLUSTER_CONTEXT_2} apply -k install/overlays/workload-cluster-2
Store this manifest in a version control system (e.g. git repository)

We strongly recommend storing applied manifest in version control, as it will make upgrades easier.

Verify your installation

You can use novactl status subcommand to examine the state of the Nova Control Plane:

kubectl nova --context=${NOVA_CONTROLPLANE_CONTEXT} --hosting-cluster-context=${K8S_HOSTING_CLUSTER_CONTEXT} --hosting-cluster-nova-namespace=${NOVA_NAMESPACE} status
Checking status of Nova Control Plane Components

* API Server status... Running √
* Kube-controller-manager status... Running √
* ETCD status... Running √
* Nova scheduler status... Running √
Nova Control Plane is healthy √

Checking presence of Nova Custom Resource Definitions

* Cluster CRD presence... installed √
* Cluster wlc-1 connected and ready √
* Cluster wlc-2 connected and ready √
* SchedulePolicy CRD presence... installed √
* 0 SchedulePolicies defined ‼
please create at least one SchedulePolicy, otherwise Nova does not know where to run your workloads. SchedulePolicy spec: https://docs.elotl.co/nova/intro
* ScheduleGroup CRD presence... installed √
All Nova Custom Resource Definitions installed √

Uninstalling Nova

Uninstalling Nova Agent

Uninstalling the Nova agent from the workload cluster is as simple as deleting the agent resources we created in the installation steps:

kubectl --context=${K8S_CLUSTER_CONTEXT_1} delete -k install/overlays/workload-cluster-1
kubectl --context=${K8S_CLUSTER_CONTEXT_1} delete secret -n ${NOVA_NAMESPACE} nova-cluster-init-kubeconfig
kubectl --context=${K8S_CLUSTER_CONTEXT_1} delete ns ${NOVA_NAMESPACE}
kubectl --context=${K8S_CLUSTER_CONTEXT_2} delete -k install/overlays/workload-cluster-2
kubectl --context=${K8S_CLUSTER_CONTEXT_2} delete secret -n ${NOVA_NAMESPACE} nova-cluster-init-kubeconfig
kubectl --context=${K8S_CLUSTER_CONTEXT_2} delete ns ${NOVA_NAMESPACE}

If you want to remove the associated clusters from the Nova Control Plane, run the following, with workload-cluster-1 and workload-cluster-2 set to the names of your workload clusters:

kubectl delete --context=${NOVA_CONTROLPLANE_CONTEXT} clusters workload-cluster-1
kubectl delete --context=${NOVA_CONTROLPLANE_CONTEXT} clusters workload-cluster-2

Uninstalling Nova Control Plane

Uninstalling the Nova Control Plane from the hosting cluster is as simple as deleting the control plane resources we created in the installation steps:

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} delete -k install/base/control-plane/

Removing Nova API Server Service

Understand the impact before deleting this Service

Make sure that you know what you are doing before deleting this Service. Without API Server being exposed to the workload clusters and users, Nova Control Plane is not usable. You should delete it only if you don't intend to use Nova anymore.

kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} delete -f "${PWD}/nova_certificates.yaml"
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} delete -f install/base/control-plane/apiserver.yaml -n ${NOVA_NAMESPACE}
kubectl --context=${K8S_HOSTING_CLUSTER_CONTEXT} delete namespace ${NOVA_NAMESPACE}