Google GKE
This section details GCP-specific configuration options.
Image Type
By default, Luna lets GCP select the image type for the nodes Luna adds to the cluster. Set the gcp.imageType option to have Luna specify the image type for its added nodes instead. GCP's default image type and the valid image type values are available via the following command:
gcloud container get-server-config
For example, to set the image type for Luna nodes to UBUNTU_CONTAINERD:
--set gcp.imageType=UBUNTU_CONTAINERD
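The same setting can equivalently live in a Helm values file instead of being passed on the command line; a minimal sketch (the rest of the chart's values are assumed to be supplied elsewhere):

```yaml
# Helm values fragment (sketch): pin the image type for Luna-added nodes.
gcp:
  imageType: UBUNTU_CONTAINERD
```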
Disk Type and Size
The gcp.diskType parameter specifies the type of disk to use on the nodes. The available options are pd-standard, pd-ssd, and pd-balanced. By default, pd-balanced is used.
To set a specific disk type, use the following Helm parameter:
--set gcp.diskType=pd-ssd
Note that not all instance types are compatible with the pd-standard disk type. If Luna selects a C3 or G2 machine series and gcp.diskType is set to pd-standard, node creation will fail.
The gcp.diskSizeGb parameter defines the disk size in gigabytes. If the disk size is too small to accommodate the system, the node may fail to start or may not be functional after starting. The minimum disk size is 10 GB, and the default size is 100 GB.
To set a specific disk size, use the following Helm parameter:
--set gcp.diskSizeGb=200
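Both disk settings are commonly set together; a values-file sketch combining them (values chosen for illustration only):

```yaml
# Helm values fragment (sketch): SSD-backed 200 GB boot disks for Luna nodes.
gcp:
  diskType: pd-ssd
  diskSizeGb: 200
```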
Additional local SSD-backed raw block storage
Luna supports attaching additional raw block devices to its GKE nodes, allowing you to build node-level caches for the pods running on the nodes.
There are three mutually exclusive options for adding raw block devices:
- Legacy local SSD count
- Modern NVMe SSD count
- Ephemeral storage
Refer to the GKE documentation on local SSDs to learn about the differences between these options.
For modern node types, the following options are equivalent:
--set gcp.localSsdCount=4 # Legacy option (SCSI and NVMe)
--set gcp.localNvmeSsdBlockConfig.localSsdCount=4 # New option (NVMe only)
For ephemeral storage, use:
--set gcp.ephemeralStorageLocalSsdConfig.localSsdCount=4
If you specify more than one of these options, node creation will fail.
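In a values file, exactly one of the three options should appear; a sketch using the modern NVMe form (the count of 4 is illustrative):

```yaml
# Helm values fragment (sketch): attach 4 NVMe raw block devices per node.
# Do not combine with localSsdCount or ephemeralStorageLocalSsdConfig.
gcp:
  localNvmeSsdBlockConfig:
    localSsdCount: 4
```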
You can also use the following annotations to configure the raw storage devices attached to your bin-selected nodes:
annotations:
  node.elotl.co/gcp.localSsdCount: "4"
  node.elotl.co/gcp.localNvmeSsdBlockConfig.localSsdCount: "4"
  node.elotl.co/gcp.ephemeralStorageLocalSsdConfig.localSsdCount: "4"
Bin-packed nodes do not support this annotation-based configuration. Where supported, annotations override the default Luna configuration.
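A sketch of how one of these annotations might be placed on a bin-selected workload; the pod name and image are placeholders, and only one of the mutually exclusive annotation keys is used:

```yaml
# Sketch: pod carrying a Luna raw-block-device annotation (bin-select only).
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod          # placeholder name
  annotations:
    node.elotl.co/gcp.localNvmeSsdBlockConfig.localSsdCount: "4"
spec:
  containers:
    - name: app
      image: nginx         # placeholder image
```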
Important considerations
In all cases, ensure the number of attached local SSDs is compatible with the selected instance types.
To disable these options, set the number of local raw block devices to 0.
Node Service Account
By default, node VMs access Google Cloud Platform using the default service account. The gcp.nodeServiceAccount option can be set to the email address of an alternative service account to be used by the Luna-allocated node VMs.
For example, to set an alternative Google Cloud Platform service account for the Luna-allocated node VMs:
--set gcp.nodeServiceAccount=myemail@myproject.iam.gserviceaccount.com
Network Tags
gcp.networkTags specifies the network tags to add to the nodes. It is a list of strings.
--set gcp.networkTags[0]=tag-value
--set gcp.networkTags[1]=other-tag-value
Empty by default.
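The indexed `--set` syntax above can be awkward for longer lists; a values-file sketch expressing the same tags (tag values are illustrative):

```yaml
# Helm values fragment (sketch): network tags applied to Luna nodes.
gcp:
  networkTags:
    - tag-value
    - other-tag-value
```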
GCE Instance Metadata
gcp.gceInstanceMetadata specifies the metadata to add to the GCE instances backing the Kubernetes nodes. It is a dictionary.
--set gcp.gceInstanceMetadata.key1=value1
--set gcp.gceInstanceMetadata.key2=value2
Empty by default.
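The same dictionary in values-file form (keys and values are illustrative):

```yaml
# Helm values fragment (sketch): metadata added to the backing GCE instances.
gcp:
  gceInstanceMetadata:
    key1: value1
    key2: value2
```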
Bin Packing Zone Spread
When gcp.binPackingZoneSpread is true (default: false) on a regional GKE cluster, Luna supports placement of bin-packing pods that specify zone spread. With this feature enabled, Luna keeps a minimum of one bin-packing node in each zone as long as bin-packing pods are running, giving kube-scheduler visibility into all zones.
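A pod that relies on this feature would typically declare a zone topology spread constraint; a minimal sketch (pod name, label, and image are placeholders, not Luna requirements):

```yaml
# Sketch: a bin-packing pod requesting spread across zones.
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo        # placeholder name
  labels:
    app: spread-demo
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: spread-demo
  containers:
    - name: app
      image: nginx         # placeholder image
```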
Node Management: auto-upgrade and auto-repair
gcp.autoUpgrade and gcp.autoRepair define the node management services for the node pools. Both are true by default.
See the GKE documentation for NodeManagement for more information.
To disable auto-upgrade and auto-repair, pass the following Helm values:
--set gcp.autoUpgrade=false
--set gcp.autoRepair=false
Note that to disable node auto-upgrade on node pools, the cluster must be configured to use a static version instead of release channels; otherwise node creation will fail.
Shielded Instance Configuration: secure boot and integrity monitoring
gcp.enableSecureBoot and gcp.enableIntegrityMonitoring control secure boot and integrity monitoring on the node pools. Both are false by default.
See the GKE documentation for ShieldedInstanceConfig for more information.
To enable secure boot and integrity monitoring, pass the following Helm values:
--set gcp.enableSecureBoot=true
--set gcp.enableIntegrityMonitoring=true
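The node-management and shielded-instance flags can also be kept together in a values file; a sketch showing non-default settings for all four:

```yaml
# Helm values fragment (sketch): disable node management, enable shielded
# instance features for Luna node pools.
gcp:
  autoUpgrade: false
  autoRepair: false
  enableSecureBoot: true
  enableIntegrityMonitoring: true
```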
Node Version
gcp.version specifies the version of Kubernetes to run on the Luna-managed nodes. It is empty by default. When the version is not specified, each node starts with the same version of Kubernetes as the control plane.
To get the list of available versions you can run the following command:
gcloud container get-server-config --format="yaml(validNodeVersions)"
Because unversioned nodes track the control plane, updating the control plane means any existing node pools running older versions will no longer scale up; new node pools with the updated version will be created instead.
Ensure that the gcp.version you select is compatible with your cluster. Incompatibility will prevent Luna from successfully provisioning the nodes.
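A values-file sketch; the version string below is a placeholder only, so substitute one returned by the `gcloud container get-server-config` command above:

```yaml
# Helm values fragment (sketch): pin the Kubernetes version on Luna nodes.
gcp:
  version: "1.29.1-gke.1589000"   # placeholder; use a validNodeVersions entry
```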
OAuth Scopes
gcp.oauthScopes specifies the set of Google API scopes to be made available on all node VMs under the "default" service account. It is an array of strings, empty by default.
The specified scopes will be added to the built-in scopes. Built-in scopes are cluster type dependent. See the Google Kubernetes Engine documentation about OAuth Scopes to learn more.
For example, to allow nodes to mount persistent storage and communicate with gcr.io, add the following Helm values:
--set gcp.oauthScopes[0]=https://www.googleapis.com/auth/compute
--set gcp.oauthScopes[1]=https://www.googleapis.com/auth/devstorage.read_only
If you want to reset the gcp.oauthScopes parameter after it has been set, you have a few options:
- Use --set gcp.oauthScopes=null during upgrades
- Use --set-json gcp.oauthScopes=[] during upgrades
- Set the parameter to an empty array in the values file
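A values-file sketch covering both cases, using the two scopes from the example above:

```yaml
# Helm values fragment (sketch): extra OAuth scopes for Luna node VMs.
gcp:
  oauthScopes:
    - https://www.googleapis.com/auth/compute
    - https://www.googleapis.com/auth/devstorage.read_only

# To reset the parameter in a values file, use an empty array instead:
# gcp:
#   oauthScopes: []
```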
Resource Labels
gcp.resourceLabels specifies resource labels to add to the nodes, in the form of key-value pairs. By default, this mapping is empty.
To add a resource label foo=bar to your nodes, use the following command:
--set gcp.resourceLabels.foo=bar
If your label key includes dots, you must escape them using a backslash (\). For example:
--set gcp.resourceLabels.my\.label\.example=value
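In a values file, no backslash escaping is needed, since YAML keys may contain dots; a sketch using the labels from the examples above:

```yaml
# Helm values fragment (sketch): resource labels applied to Luna nodes.
gcp:
  resourceLabels:
    foo: bar
    my.label.example: value   # dots are fine in YAML keys; no escaping needed
```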
For more detailed information, refer to the Google Cloud documentation on Labeling Resources.