Version: v1.4

Scalability

To demonstrate the scalability of the Nova Control Plane, we run stress tests that measure the resource usage of its components (API Server, etcd, Scheduler, Controller Manager) across varying fleet sizes.

Test Environment

  • Infrastructure: Nova Control Plane deployed on a dedicated Kubernetes hosting cluster. The hosting cluster runs the API server, etcd, scheduler, and controller manager components that are monitored during the tests.
  • Workload Clusters: Simulated using vclusters running on a dedicated worker node in a separate cluster. This approach lets us scale the number of virtual clusters without provisioning physical cluster infrastructure for each test scenario.
  • Workload: A microservices Retail Store Application distributed across the fleet via a Nova Spread Duplicate Policy. The workload consists of:
    • ServiceAccount: 5
    • Secret: 4
    • ConfigMap: 5
    • Service: 10
    • Deployment: 7 (each with replicas: 1)
    • StatefulSet: 3 (each with replicas: 1)
  • Measurement: Resource consumption was measured with the kubectl top pod -n elotl --sum=true command, run in the hosting cluster after scaling the fleet to each cluster count, as sketched below.
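
For reference, the following is a minimal sketch of how such measurements can be collected. The loop, the fleet sizes, and the output file name are illustrative, not the exact harness used for these tests; how the simulated vcluster fleet is scaled between measurements is environment-specific and omitted.

```shell
# Illustrative measurement loop (not the exact harness used for these tests).
# Assumes the current kubeconfig points at the hosting cluster and that a
# metrics server is available for `kubectl top`.
for clusters in 10 100 500; do
  # Scaling the simulated vcluster fleet to "$clusters" clusters is
  # environment-specific and omitted here.
  echo "--- fleet size: $clusters ---" >> nova-usage.txt
  kubectl top pod -n elotl --sum=true >> nova-usage.txt
done
```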

Scenario 1: Idle Fleet Connectivity

This test measures the resource footprint of the Nova Control Plane while maintaining connections to idle workload clusters.

Active Clusters    CPU Usage    Memory Usage
10                 81m          358Mi
100                120m         386Mi
500                118m         432Mi

Scenario 2: Active Workload Orchestration

This test evaluates Nova's resource usage while actively managing an application across the fleet. We used a Spread Duplicate Policy to deploy one instance of the Retail Store Application to every cluster.
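
For illustration, such a policy might look roughly like the sketch below. The policy name, namespace, selector keys, and kubeconfig file name are assumptions made for this example, and the field layout reflects our reading of the SchedulePolicy CRD; consult the Nova SchedulePolicy reference for the authoritative schema in your version.

```shell
# Illustrative only: applying a spread-duplicate policy through the Nova
# Control Plane kubeconfig (file name is hypothetical). Verify field names
# against the Nova SchedulePolicy reference before use.
cat <<'EOF' | kubectl --kubeconfig=nova-kubeconfig apply -f -
apiVersion: policy.elotl.co/v1alpha1
kind: SchedulePolicy
metadata:
  name: retail-store-spread                        # hypothetical policy name
spec:
  namespaceSelector:                               # select the namespace holding the app
    matchLabels:
      kubernetes.io/metadata.name: retail-store    # hypothetical namespace
  clusterSelector:                                 # match every connected workload cluster
    matchExpressions:
      - key: nova.elotl.co/cluster-name            # hypothetical label key; Exists matches all
        operator: Exists
  spreadConstraints:
    spreadMode: Duplicate                          # one full copy of the workload per matching cluster
EOF
```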

Active Clusters    CPU Usage    Memory Usage
10                 71m          486Mi
20                 134m         588Mi
100                557m         1100Mi
200                1516m        1689Mi