Luna EKS Testing Tutorials
The following basic test examples can be used to validate Luna's functionality and operation.
ML Use Cases with NVIDIA GPUs
By default, Luna autoscales pods carrying the label elotl-luna=true. Apply the YAML below with kubectl apply -f. You should see a pod start and run to completion on a p3 instance equipped with an NVIDIA V100 GPU. Once the pod completes, the corresponding node is automatically terminated.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
  annotations:
    "node.elotl.co/instance-gpu-skus": "V100"
  labels:
    elotl-luna: "true"
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda12.5.0"
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
You can monitor the creation of the new pod and node by running watch kubectl get pods,nodes. The test pod will only be active for a brief period after it starts; the GPU node added to support it will persist for a few more minutes.
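Once the pod completes, you can also check its logs to confirm the CUDA sample ran; the vectoradd sample typically finishes with a "Test PASSED" line:
kubectl logs gpu-test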
To confirm the presence of a GPU on the node, run the kubectl describe node command and look for the "nvidia.com/gpu" entry, or alternatively run the following command:
kubectl get nodes "-o=custom-columns=NAME:.metadata.name,KUBELET STATUS:.status.conditions[3].reason,CREATED:.metadata.creationTimestamp,VERSION:.status.nodeInfo.kubeletVersion,NVIDIA GPU(s):.status.allocatable.nvidia\.com/gpu"
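To spot-check a single node instead, you can pair kubectl describe with grep; here <node-name> is a placeholder for the GPU node's name:
kubectl describe node <node-name> | grep "nvidia.com/gpu"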
General Testing (non-ML)
Luna will attempt to consolidate multiple smaller pods onto a newly deployed node or, for larger pods, allocate a dedicated node for the pod, as occurs with the bin-selection packing mode.
You can perform simple testing using busybox or other pods of varying sizes. The following YAML files can be utilized, and the number of pods can be adjusted to observe Luna's dynamic response.
Small busybox deployment
Several busybox pods will be co-located on a single node.
busybox
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 6
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
        elotl-luna: "true"
    spec:
      containers:
      - name: busybox
        image: busybox
        resources:
          requests:
            cpu: 200m
            memory: 128Mi
          limits:
            cpu: 300m
            memory: 256Mi
        command:
        - sleep
        - "infinity"
EOF
Larger busybox deployment
A single busybox pod will hit the threshold for bin selection and be placed on its own node.
busybox-large
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-large
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-large
  template:
    metadata:
      labels:
        app: busybox-large
        elotl-luna: "true"
    spec:
      containers:
      - name: busybox-large
        image: busybox
        resources:
          requests:
            cpu: 4
            memory: 256Mi
          limits:
            cpu: 6
            memory: 512Mi
        command:
        - sleep
        - "infinity"
EOF
Check that the pods have started; the -o wide option will show which nodes the pods are running on:
kubectl get pods -l elotl-luna=true -o wide
Sample output
NAME                             READY   STATUS    RESTARTS   AGE     IP                NODE                                            NOMINATED NODE   READINESS GATES
busybox-65cb45c86b-h9mz8         1/1     Running   0          3m40s   192.168.179.106   ip-192-168-180-235.us-east-2.compute.internal   <none>           <none>
busybox-65cb45c86b-ttgxz         1/1     Running   0          3m40s   192.168.180.96    ip-192-168-180-235.us-east-2.compute.internal   <none>           <none>
busybox-large-57f654fff8-nf4tj   1/1     Running   0          3m32s   192.168.181.40    ip-192-168-177-117.us-east-2.compute.internal   <none>           <none>
Next, we can verify the node information to confirm which instance types were selected and added to the Kubernetes cluster by Luna.
kubectl get nodes "-o=custom-columns=NAME:.metadata.name,KUBELET STATUS:.status.conditions[3].reason,CREATED:.metadata.creationTimestamp,VERSION:.status.nodeInfo.kubeletVersion,INSTANCE TYPE:.metadata.labels.node\.kubernetes\.io/instance-type,CPU(S):.status.capacity.cpu,MEMORY:.status.capacity.memory" --sort-by=.metadata.creationTimestamp
Sample output
NAME                                            KUBELET STATUS   CREATED                VERSION               INSTANCE TYPE   CPU(S)   MEMORY
ip-192-168-62-44.us-east-2.compute.internal     KubeletReady     2023-02-08T19:52:13Z   v1.24.9-eks-49d8fe8   t3.xlarge       4        16202984Ki
ip-192-168-180-235.us-east-2.compute.internal   KubeletReady     2023-02-10T16:40:11Z   v1.24.9-eks-49d8fe8   t3a.xlarge      4        16248040Ki
ip-192-168-177-117.us-east-2.compute.internal   KubeletReady     2023-02-10T16:40:26Z   v1.24.9-eks-49d8fe8   t3a.2xlarge     8        32608568Ki
Zone Affinity and Spread Testing
Luna running on an EKS cluster supports kube-scheduler pod placement that includes zone spread or zone affinity. Luna recognizes zone spread expressed in the pod spec topologySpreadConstraints field with topologyKey set to topology.kubernetes.io/zone, and zone affinity expressed in the pod spec nodeAffinity field as a topology.kubernetes.io/zone key with a set of zone values. If Luna's placeNodeSelector option is set to true, Luna also recognizes zone affinity expressed as a topology.kubernetes.io/zone nodeSelector. If Luna's placeBoundPVC option is set to true, Luna also recognizes zone affinity when the pod has a persistent volume claim bound to a storage class or persistent volume with zone affinity. Luna does not currently support both zone spread and zone affinity on the same pod; in that case the Luna webhook reports an error and skips the pod.
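For reference, a minimal sketch of the nodeSelector form of zone affinity (recognized when placeNodeSelector is true) might look like the following; the pod name and zone value here are illustrative only:
apiVersion: v1
kind: Pod
metadata:
  name: zone-selector-example
  labels:
    elotl-luna: "true"
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-west-2d
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "infinity"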
If zone spread support is enabled for bin packing on EKS (option aws.binPackingZoneSpread=true), Luna creates at least one bin pack node in each of the cluster's zones, so that kube-scheduler has visibility into the full set of available zones. Then whenever Luna sees pending bin pack pod(s) with zone spread, it scales up the node count in each of the cluster's zones, to ensure that kube-scheduler can find needed zone-specific resources. Luna will scale down any unused nodes once the pending pods have been placed. When Luna sees pending bin pack pod(s) with zone affinity, it scales up the node count in an affine zone. And when Luna sees pending bin pack pod(s) with neither zone spread nor zone affinity, it scales up the node count in a single chosen zone.
Luna bin selection support for zone spread and affinity works similarly. By default (option reuseBinSelectNodes=true) Luna bin selection groups pods for node selection based on factors relevant to choosing a node type, including various pod spec fields and Luna annotations. For a pending pod in a group that includes zone spread, Luna scales up the node count in each of its zones. For a pending pod in a group that includes zone affinity, Luna scales up the node count in an affine zone, and for a pending pod in a group that includes neither zone spread nor zone affinity, it scales up the node count in a single chosen zone.
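If you deploy Luna via Helm, a sketch of setting these options at upgrade time might look like the following; the release and chart names are placeholders, so substitute the values from your Luna installation:
helm upgrade <luna-release> <luna-chart> \
  --set aws.binPackingZoneSpread=true \
  --set reuseBinSelectNodes=true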
Examples are given below.
Small busybox zone affinity deployment
This small busybox pod will use Luna bin packing mode and is affine to the us-west-2d zone in a us-west-2 EKS cluster.
busybox-small-zone-affinity
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-small-zone-affinity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-small-zone-affinity
  template:
    metadata:
      labels:
        app: busybox-small-zone-affinity
        elotl-luna: "true"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-west-2d
      containers:
      - name: busybox-small-zone-affinity
        image: busybox
        resources:
          requests:
            cpu: 200m
            memory: 128Mi
          limits:
            cpu: 300m
            memory: 256Mi
        command:
        - sleep
        - "infinity"
EOF
You can see the node running the zone affine pod:
kubectl get pods -l elotl-luna=true -l app=busybox-small-zone-affinity -o wide | awk {'print $1" " $7'} | column -t
NAME                                          NODE
busybox-small-zone-affinity-65f957cd4-76hhq   ip-192-168-10-108.us-west-2.compute.internal
And check the zone associated with that node to see it is us-west-2d:
kubectl get nodes -Ltopology.kubernetes.io/zone | awk {'print $1" " $6'} | grep ip-192-168-10-108.us-west-2.compute.internal
ip-192-168-10-108.us-west-2.compute.internal   us-west-2d
Large busybox zone affinity deployment
This large busybox pod will use Luna bin selection mode and is affine to the us-west-2d zone in a us-west-2 EKS cluster.
busybox-affinity
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-affinity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-affinity
  template:
    metadata:
      labels:
        app: busybox-affinity
        elotl-luna: "true"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-west-2d
      containers:
      - name: busybox-affinity
        image: busybox
        resources:
          requests:
            cpu: 4
            memory: 256Mi
          limits:
            cpu: 6
            memory: 512Mi
        command:
        - sleep
        - "infinity"
EOF
As in the previous example, you can see which node is running the zone affinity pod and can check that it is in the correct zone.
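For example, mirroring the earlier commands (with <node-name> standing in for the node reported for your pod):
kubectl get pods -l elotl-luna=true -l app=busybox-affinity -o wide | awk {'print $1" " $7'} | column -t
kubectl get nodes -Ltopology.kubernetes.io/zone | awk {'print $1" " $6'} | grep <node-name>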
Small busybox zone spread deployment
These 3 small busybox pods will use Luna bin packing mode and be spread across the 3 zones in an EKS cluster.
busybox-small-zone-spread
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-small-zone-spread
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox-small-zone-spread
  template:
    metadata:
      labels:
        app: busybox-small-zone-spread
        elotl-luna: "true"
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: busybox-small-zone-spread
      containers:
      - name: busybox-small-zone-spread
        image: busybox
        resources:
          requests:
            cpu: 200m
            memory: 128Mi
          limits:
            cpu: 300m
            memory: 256Mi
        command:
        - sleep
        - "infinity"
EOF
You can see the nodes running the zone spread pods:
kubectl get pods -l elotl-luna=true -l app=busybox-small-zone-spread -o wide | awk {'print $1" " $7'} | column -t
NAME                                         NODE
busybox-small-zone-spread-5bd574fbdc-5pkpd   ip-192-168-149-31.us-west-2.compute.internal
busybox-small-zone-spread-5bd574fbdc-dwkz2   ip-192-168-25-137.us-west-2.compute.internal
busybox-small-zone-spread-5bd574fbdc-w5z6q   ip-192-168-174-122.us-west-2.compute.internal
And can check the zones associated with those nodes to see the spread:
kubectl get nodes -Ltopology.kubernetes.io/zone | awk {'print $1" " $6'} | column -t
NAME                                            ZONE
ip-192-168-149-31.us-west-2.compute.internal    us-west-2a
ip-192-168-174-122.us-west-2.compute.internal   us-west-2c
ip-192-168-25-137.us-west-2.compute.internal    us-west-2d
...
Large busybox zone spread deployment
These 3 large busybox pods will use Luna bin selection mode and be spread across the zones in an EKS cluster.
busybox-spread
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-spread
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox-spread
  template:
    metadata:
      labels:
        app: busybox-spread
        elotl-luna: "true"
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: busybox-spread
      containers:
      - name: busybox-spread
        image: busybox
        resources:
          requests:
            cpu: 4
            memory: 256Mi
          limits:
            cpu: 6
            memory: 512Mi
        command:
        - sleep
        - "infinity"
EOF
You can see the nodes running the zone spread pods:
kubectl get pods -l elotl-luna=true -l app=busybox-spread -o wide | awk {'print $1" " $7'} | column -t
NAME                             NODE
busybox-spread-77ccbd588-nb6h8   ip-192-168-171-139.us-west-2.compute.internal
busybox-spread-77ccbd588-vjr5t   ip-192-168-21-145.us-west-2.compute.internal
busybox-spread-77ccbd588-zjr5z   ip-192-168-139-212.us-west-2.compute.internal
And can check the zones associated with those nodes to see the spread:
kubectl get nodes -Ltopology.kubernetes.io/zone | awk {'print $1" " $6'} | column -t
NAME                                            ZONE
ip-192-168-139-212.us-west-2.compute.internal   us-west-2a
ip-192-168-171-139.us-west-2.compute.internal   us-west-2c
ip-192-168-21-145.us-west-2.compute.internal    us-west-2d
...