From 2b00561fa4ce544b956d51a2407e632f773f9815 Mon Sep 17 00:00:00 2001
From: abhu85 <60182103+abhu85@users.noreply.github.com>
Date: Thu, 12 Mar 2026 20:26:09 -0500
Subject: [PATCH] docs: fix incorrect label references in Graviton scheduling
 docs

The documentation incorrectly referenced the `tainted=yes` label when the
actual nodeSelector uses `kubernetes.io/arch=arm64`. This was confusing
because:

1. Line 35 mentioned "tainted=yes" but the manifest uses
   "kubernetes.io/arch: arm64"
2. Line 88 said the pod was pinned to "tainted=yes" but it was actually
   pinned to "kubernetes.io/arch=arm64"

Updated the explanations to correctly reference the ARM64 architecture
label that is actually used in the examples.

Fixes #1807

Co-Authored-By: Claude Opus 4.6
---
 .../managed-node-groups/graviton/scheduling-graviton.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/website/docs/fundamentals/compute/managed-node-groups/graviton/scheduling-graviton.md b/website/docs/fundamentals/compute/managed-node-groups/graviton/scheduling-graviton.md
index 6ce14f88e5..0b2ef204af 100644
--- a/website/docs/fundamentals/compute/managed-node-groups/graviton/scheduling-graviton.md
+++ b/website/docs/fundamentals/compute/managed-node-groups/graviton/scheduling-graviton.md
@@ -32,7 +32,7 @@ Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists fo
 
 As anticipated, the application is running successfully on a non-tainted node. The associated pod is in a `Running` status and we can confirm that no custom tolerations have been configured. Note that Kubernetes automatically adds tolerations for `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` with `tolerationSeconds=300`, unless you or a controller set those tolerations explicitly. These automatically-added tolerations mean that Pods remain bound to Nodes for 5 minutes after one of these problems is detected.
 
-Let's update our `ui` deployment to bind its pods to our tainted managed node group. We have pre-configured our tainted managed node group with a label of `tainted=yes` that we can use with a `nodeSelector`. The following `Kustomize` patch describes the changes needed to our deployment configuration in order to enable this setup:
+Let's update our `ui` deployment to bind its pods to our tainted managed node group. Graviton-based nodes are identified by the `kubernetes.io/arch=arm64` label, which we can use with a `nodeSelector`. The following `Kustomize` patch describes the changes needed to our deployment configuration in order to enable this setup:
 
 ```kustomization
 modules/fundamentals/mng/graviton/nodeselector-wo-toleration/deployment.yaml
@@ -85,7 +85,7 @@ Events:
   Warning  FailedScheduling  19s  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {frontend: true}, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
 ```
 
-Our changes are reflected in the new configuration of the `Pending` pod. We can see that we have pinned the pod to any node with the `tainted=yes` label but this introduced a new problem as our pod cannot be scheduled (`PodScheduled False`). A more useful explanation can be found under the `events`:
+Our changes are reflected in the new configuration of the `Pending` pod. We can see that we have pinned the pod to any node with the `kubernetes.io/arch=arm64` label but this introduced a new problem as our pod cannot be scheduled (`PodScheduled False`). A more useful explanation can be found under the `events`:
 
 ```text
 0/4 nodes are available: 1 node(s) had untolerated taint {frontend: true}, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.