# Scalability
To demonstrate the Nova Control Plane's scalability, we ran stress tests measuring the resource usage of its components (API Server, etcd, Scheduler, Controller Manager) across varying fleet sizes.
## Test Environment
- Infrastructure: Nova control plane deployed on a dedicated Kubernetes hosting cluster. The hosting cluster runs the API server, etcd, scheduler, and controller manager components monitored during the tests.
- Workload Clusters: Simulated using `vcluster` instances on a dedicated worker node in a separate cluster. This approach lets us scale the number of virtual clusters without provisioning physical cluster infrastructure for each test scenario.
- Workload: Retail Store Application (microservices) distributed via the Nova Spread Duplicate Policy. The workload consists of:
  - ServiceAccount: 5
  - Secret: 4
  - ConfigMap: 5
  - Service: 10
  - Deployment: 7 (each with `replicas: 1`)
  - StatefulSet: 3 (each with `replicas: 1`)
- Measurement: Resource consumption was measured with the `kubectl top pod -n elotl --sum=true` command in the hosting cluster after scaling to each cluster count.
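The measurement step above is easy to automate. As a sketch, the function below parses the summed totals row that `kubectl top pod --sum=true` appends to its output; the exact column layout and the pod names in the sample are assumptions for illustration, not captured output from these tests.

```python
import re

def parse_top_sum(output: str) -> tuple[int, int]:
    """Parse the totals row of `kubectl top pod --sum=true` output.

    Returns (cpu_millicores, memory_mib). Assumes the totals row has a
    blank name column, e.g. "        120m   386Mi".
    """
    for line in output.strip().splitlines():
        # Only the totals row starts with whitespace (no pod name).
        m = re.match(r"\s+(\d+)m\s+(\d+)Mi\s*$", line)
        if m:
            return int(m.group(1)), int(m.group(2))
    raise ValueError("no totals row found")

# Hypothetical sample output, for illustration only.
sample = """NAME                       CPU(cores)   MEMORY(bytes)
nova-apiserver-0           60m          200Mi
nova-scheduler-0           60m          186Mi
                           120m         386Mi"""

cpu_m, mem_mi = parse_top_sum(sample)
print(cpu_m, mem_mi)  # 120 386
```

Recording these two numbers after each scale-up step produces the tables in the scenarios below.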
## Scenario 1: Idle Fleet Connectivity
This test measures the resource footprint of the Nova Control Plane while maintaining connections to idle workload clusters.
| Active Clusters | CPU Usage (millicores) | Memory Usage (MiB) |
|---|---|---|
| 10 | 81m | 358Mi |
| 100 | 120m | 386Mi |
| 500 | 118m | 432Mi |
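As a quick sanity check on how flat the idle footprint is, the marginal per-cluster cost between the smallest and largest measurements can be computed directly from the table above (a back-of-the-envelope sketch, not part of the test harness):

```python
# Idle-fleet measurements from the table: (clusters, cpu_millicores, memory_mib)
idle = [(10, 81, 358), (100, 120, 386), (500, 118, 432)]

clusters_lo, cpu_lo, mem_lo = idle[0]
clusters_hi, cpu_hi, mem_hi = idle[-1]

span = clusters_hi - clusters_lo                # 490 additional idle clusters
cpu_per_cluster = (cpu_hi - cpu_lo) / span      # millicores per extra cluster
mem_per_cluster = (mem_hi - mem_lo) / span      # MiB per extra cluster

print(f"{cpu_per_cluster:.3f} m/cluster, {mem_per_cluster:.3f} MiB/cluster")
# → 0.076 m/cluster, 0.151 MiB/cluster
```

In other words, each additional idle cluster costs well under a millicore of CPU and a fraction of a mebibyte of memory, so idle connectivity overhead is negligible at this scale.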
## Scenario 2: Active Workload Orchestration
This test evaluates Nova's resource usage while actively managing an application across the fleet. We used a Spread Duplicate Policy to deploy one instance of the Retail Store Application to every cluster.
| Active Clusters | CPU Usage (millicores) | Memory Usage (MiB) |
|---|---|---|
| 10 | 71m | 486Mi |
| 20 | 134m | 588Mi |
| 100 | 557m | 1100Mi |
| 200 | 1516m | 1689Mi |
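Normalizing the table above by cluster count makes the scaling behavior easier to read; the snippet below is a small illustration over the measured data points:

```python
# Active-orchestration measurements from the table:
# (clusters, cpu_millicores, memory_mib)
active = [(10, 71, 486), (20, 134, 588), (100, 557, 1100), (200, 1516, 1689)]

for clusters, cpu_m, mem_mi in active:
    print(f"{clusters:>4} clusters: {cpu_m / clusters:.2f} m CPU, "
          f"{mem_mi / clusters:.2f} MiB per cluster")
```

Per-cluster CPU stays in a narrow band (roughly 5.6 to 7.6 millicores), so CPU grows about linearly with the number of actively managed clusters, while per-cluster memory falls sharply as the control plane's fixed footprint is amortized over the fleet.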