Version: v1.1


Luna OKE Testing Tutorials

A few basic test examples can be used to validate the functionality and operation of Luna.

ML Use Cases with NVIDIA GPUs

By default, Luna autoscales pods that carry the label elotl-luna=true. Apply the following manifest with kubectl apply -f. You will observe a pod run to completion on a GPU instance matching the requested GPU SKU (a P100 in this example, per the node.elotl.co/instance-gpu-skus annotation).

Upon completion of the pod, the corresponding node will be automatically terminated.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
  annotations:
    node.elotl.co/instance-gpu-skus: "P100"
  labels:
    elotl-luna: "true"
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
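Once the pod completes, you can inspect its logs to confirm the CUDA vector-add test actually ran on the GPU (the exact output depends on the image version):

```shell
kubectl logs gpu-test
```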

You can monitor the creation of the new pod and node by running the command watch kubectl get pods,nodes. The test pod will only be active for a brief period after it starts. The GPU node that was added to support the pod will persist for a few more minutes.

To confirm the presence of a GPU on the node, you can run kubectl describe node and look for the "nvidia.com/gpu" entry, or alternatively run the following command:

kubectl get nodes "-o=custom-columns=NAME:.metadata.name,KUBELET STATUS:.status.conditions[3].reason,CREATED:.metadata.creationTimestamp,VERSION:.status.nodeInfo.kubeletVersion,NVIDIA GPU(s):.status.allocatable.nvidia\.com/gpu"
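For a quicker check of a single node, you can also grep the describe output directly (substitute your node name for the placeholder):

```shell
kubectl describe node <node-name> | grep "nvidia.com/gpu"
```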

General Testing (non-ML)

Luna will attempt to consolidate multiple smaller pods onto a newly deployed node; larger pods that cross the bin-selection threshold are each allocated a dedicated node, as happens in the bin-selection packing mode.

You can perform simple testing using busybox or other pods of varying sizes. The following YAML files can be used, and the number of pods can be adjusted to observe Luna's dynamic response.

Small busybox deployment

Several busybox pods will be co-located on a single node.

busybox

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 6
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
        elotl-luna: "true"
    spec:
      containers:
      - name: busybox
        image: busybox
        resources:
          requests:
            cpu: 200m
            memory: 128Mi
          limits:
            cpu: 300m
            memory: 256Mi
        command:
        - sleep
        - "infinity"
EOF
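To see Luna react dynamically, you can change the replica count after the deployment is running; for example (the count of 12 is arbitrary, chosen to exceed one node's capacity):

```shell
kubectl scale deployment busybox --replicas=12
```

Luna should add capacity for the new pods, and scaling back down should eventually release it again.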

Larger busybox deployment

A single busybox pod will cross the threshold for bin-selection and be placed on its own node.

busybox-large

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-large
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-large
  template:
    metadata:
      labels:
        app: busybox-large
        elotl-luna: "true"
    spec:
      containers:
      - name: busybox-large
        image: busybox
        resources:
          requests:
            cpu: 4
            memory: 256Mi
          limits:
            cpu: 6
            memory: 512Mi
        command:
        - sleep
        - "infinity"
EOF

Check that the pods have started; the -o wide option will show which nodes the pods are running on:

kubectl get pods -l elotl-luna=true -o wide

Sample output

NAME                            READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
busybox-7f5bd44d8d-5c7n2        1/1     Running   0          6m31s   10.244.2.4     10.0.10.149   <none>           <none>
busybox-7f5bd44d8d-7k84d        1/1     Running   0          6m31s   10.244.2.8     10.0.10.149   <none>           <none>
busybox-7f5bd44d8d-9bvjz        1/1     Running   0          6m31s   10.244.2.5     10.0.10.149   <none>           <none>
busybox-7f5bd44d8d-f2cj2        1/1     Running   0          6m31s   10.244.2.2     10.0.10.149   <none>           <none>
busybox-7f5bd44d8d-nw7cl        1/1     Running   0          6m31s   10.244.2.7     10.0.10.149   <none>           <none>
busybox-7f5bd44d8d-qv875        1/1     Running   0          6m31s   10.244.2.6     10.0.10.149   <none>           <none>
busybox-large-6b6d67668-zbpfj   1/1     Running   0          5m43s   10.244.2.130   10.0.10.228   <none>           <none>

Next, we can inspect the node information to confirm which instance types Luna selected and added to the Kubernetes cluster.

kubectl get nodes "-o=custom-columns=NAME:.metadata.name,KUBELET STATUS:.status.conditions[3].reason,CREATED:.metadata.creationTimestamp,VERSION:.status.nodeInfo.kubeletVersion,INSTANCE TYPE:.metadata.labels.node\.kubernetes\.io/instance-type,CPU(S):.status.capacity.cpu,MEMORY:.status.capacity.memory" --sort-by=metadata.creationTimestamp

Sample output

NAME          KUBELET STATUS            CREATED                VERSION   INSTANCE TYPE         CPU(S)   MEMORY
10.0.10.131   KubeletHasSufficientPID   2022-08-03T16:52:36Z   v1.23.4   VM.Standard.A1.Flex   2        32438784Ki
10.0.10.17    KubeletHasSufficientPID   2022-08-03T16:53:01Z   v1.23.4   VM.Standard.A1.Flex   2        32438784Ki
10.0.10.149   KubeletHasSufficientPID   2023-04-19T15:48:22Z   v1.23.4   VM.Standard.E3.Flex   4        8852212Ki
10.0.10.228   KubeletHasSufficientPID   2023-04-19T15:49:42Z   v1.23.4   VM.Standard.E3.Flex   6        2743240Ki
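When you have finished testing, delete the test workloads (the names below match the examples above); Luna should then scale down the nodes it added once they are idle:

```shell
kubectl delete deployment busybox busybox-large
kubectl delete pod gpu-test
```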