Assigning Pods to Nodes
You can constrain a Pod so that it can only run on a particular set of nodes. There are several ways to do this, and the recommended approaches all use label selectors to facilitate the selection. Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources). However, there are some circumstances where you may want to control which node a Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different services that communicate a lot in the same availability zone.
You can use any of the following methods to choose where Kubernetes schedules specific Pods:
- nodeSelector field matching against node labels
- Affinity and anti-affinity
- nodeName field
Node labels
Like many other Kubernetes objects, nodes have labels. You can attach labels manually. Kubernetes also populates a standard set of labels on all nodes in a cluster. See Well-Known Labels, Annotations and Taints for a list of common node labels.
The value of these labels is cloud provider specific and is not guaranteed to be reliable. For example, the value of kubernetes.io/hostname may be the same as the node name in some environments and a different value in other environments.
Node isolation/restriction
Adding labels to nodes allows you to target Pods for scheduling on specific nodes or groups of nodes. You can use this functionality to ensure that specific Pods only run on nodes with certain isolation, security, or regulatory properties.
If you use labels for node isolation, choose label keys that the kubelet cannot modify. This prevents a compromised node from setting those labels on itself so that the scheduler schedules workloads onto the compromised node.
The NodeRestriction admission plugin prevents the kubelet from setting or modifying labels with a node-restriction.kubernetes.io/ prefix.
To make use of that label prefix for node isolation:
- Ensure you are using the Node authorizer and have enabled the NodeRestriction admission plugin.
- Add labels with the node-restriction.kubernetes.io/ prefix to your nodes, and use those labels in your node selectors. For example, example.com.node-restriction.kubernetes.io/fips=true or example.com.node-restriction.kubernetes.io/pci-dss=true.
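For illustration, a Pod that should only land on nodes carrying the example.com.node-restriction.kubernetes.io/fips=true label from the example above could use a node selector like the following sketch (the Pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: fips-workload                # illustrative name
spec:
  nodeSelector:
    # only schedule onto nodes that carry the kubelet-protected label
    example.com.node-restriction.kubernetes.io/fips: "true"
  containers:
  - name: app                        # placeholder container
    image: k8s.gcr.io/pause:2.0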
nodeSelector
nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
See Assign Pods to Nodes for more information.
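For example, a minimal Pod manifest using nodeSelector might look like the following sketch; the disktype=ssd label is an assumption and must already be present on at least one node:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd                    # illustrative name
spec:
  nodeSelector:
    disktype: ssd                    # assumed node label
  containers:
  - name: nginx
    image: nginx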
Affinity and anti-affinity
nodeSelector is the simplest way to constrain Pods to nodes with specific labels. Affinity and anti-affinity expand the types of constraints you can define. Some of the benefits of affinity and anti-affinity include:
- The affinity/anti-affinity language is more expressive. nodeSelector only selects nodes with all the specified labels. Affinity/anti-affinity gives you more control over the selection logic.
- You can indicate that a rule is soft or preferred, so that the scheduler still schedules the Pod even if it can't find a matching node.
- You can constrain a Pod using labels on other Pods running on the node (or other topological domain), instead of just node labels, which allows you to define rules for which Pods can be co-located on a node.
The affinity feature consists of two types of affinity:
- Node affinity functions like the nodeSelector field but is more expressive and allows you to specify soft rules.
- Inter-pod affinity/anti-affinity allows you to constrain Pods against labels on other Pods.
Node affinity
Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels. There are two types of node affinity:
- requiredDuringSchedulingIgnoredDuringExecution: The scheduler can't schedule the Pod unless the rule is met. This functions like nodeSelector, but with a more expressive syntax.
- preferredDuringSchedulingIgnoredDuringExecution: The scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the Pod.
IgnoredDuringExecution means that if the node labels change after Kubernetes schedules the Pod, the Pod continues to run.
You can specify node affinities using the .spec.affinity.nodeAffinity field in your Pod spec.
For example, consider the following Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
In this example, the following rules apply:
- The node must have a label with the key kubernetes.io/os and the value linux.
- The node preferably has a label with the key another-node-label-key and the value another-node-label-value.
You can use the operator field to specify a logical operator for Kubernetes to use when interpreting the rules. You can use In, NotIn, Exists, DoesNotExist, Gt and Lt.
NotIn and DoesNotExist allow you to define node anti-affinity behavior. Alternatively, you can use node taints to repel Pods from specific nodes.
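As a sketch of the other operators, the following node affinity requires a node where a gpu label exists (whatever its value) and an example.com/cpu-cores label has an integer value greater than 8; both label keys are hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: with-operators               # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu                     # hypothetical label; Exists ignores the value
            operator: Exists
          - key: example.com/cpu-cores   # hypothetical label compared as an integer
            operator: Gt
            values:
            - "8"
  containers:
  - name: with-operators
    image: k8s.gcr.io/pause:2.0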
If you specify both nodeSelector and nodeAffinity, both must be satisfied for the Pod to be scheduled onto a node.
If you specify multiple terms in nodeSelectorTerms associated with nodeAffinity types, then the Pod can be scheduled onto a node if one of the specified terms can be satisfied (terms are ORed).
If you specify multiple matchExpressions associated with a single term in nodeSelectorTerms, then the Pod can be scheduled onto a node only if all the matchExpressions are satisfied (expressions are ANDed).
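To sketch those combination rules, the nodeAffinity below is satisfied by any Linux node (first term) or by any node that carries both the topology.kubernetes.io/zone=zone-a and disktype=ssd labels (second term, whose expressions are ANDed); zone-a and disktype are illustrative values:
apiVersion: v1
kind: Pod
metadata:
  name: with-multiple-terms          # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        # term 1: any Linux node satisfies the requirement
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
        # term 2: only nodes with BOTH of these labels satisfy it
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - zone-a                   # illustrative zone name
          - key: disktype              # illustrative label
            operator: In
            values:
            - ssd
  containers:
  - name: with-multiple-terms
    image: k8s.gcr.io/pause:2.0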
See Assign Pods to Nodes using Node Affinity for more information.
Node affinity weight
You can specify a weight between 1 and 100 for each instance of the preferredDuringSchedulingIgnoredDuringExecution affinity type. When the scheduler finds nodes that meet all the other scheduling requirements of the Pod, the scheduler iterates through every preferred rule that the node satisfies and adds the value of the weight for that expression to a sum.
The final sum is added to the score of other priority functions for the node. Nodes with the highest total score are prioritized when the scheduler makes a scheduling decision for the Pod.
For example, consider the following Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: with-affinity-anti-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: label-1
            operator: In
            values:
            - key-1
      - weight: 50
        preference:
          matchExpressions:
          - key: label-2
            operator: In
            values:
            - key-2
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
If there are two possible nodes that match the requiredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers the weight of each node, adds the weight to the other scores for that node, and schedules the Pod onto the node with the highest final score.
If you want Kubernetes to successfully schedule the Pods in this example, you must have existing nodes with the kubernetes.io/os=linux label.
Node affinity per scheduling profile
Kubernetes v1.20 [beta]
When configuring multiple scheduling profiles, you can associate a profile with a node affinity, which is useful if a profile only applies to a specific set of nodes. To do so, add an addedAffinity to the args field of the NodeAffinity plugin in the scheduler configuration. For example:
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: foo-scheduler
    pluginConfig:
      - name: NodeAffinity
        args:
          addedAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: scheduler-profile
                  operator: In
                  values:
                  - foo
The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to satisfy the addedAffinity and the Pod's .spec.NodeAffinity.
Since the addedAffinity is not visible to end users, its behavior might be unexpected to them. Use node labels that have a clear correlation to the scheduler profile name.
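For instance, a Pod opts into that profile through its schedulerName, and the addedAffinity above then applies even though the Pod declares no nodeAffinity of its own; this is a sketch with a placeholder name and image:
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-foo             # illustrative name
spec:
  schedulerName: foo-scheduler       # selects the profile that carries the addedAffinity
  containers:
  - name: pause
    image: k8s.gcr.io/pause:2.0
    # the scheduler only places this Pod on nodes labelled scheduler-profile=foo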
The DaemonSet controller, which creates Pods for DaemonSets, does not support scheduling profiles. When the DaemonSet controller creates Pods, the default Kubernetes scheduler places those Pods and honors any nodeAffinity rules in the DaemonSet controller.
Inter-pod affinity and anti-affinity
Inter-pod affinity and anti-affinity allow you to constrain which nodes your Pods can be scheduled on based on the labels of Pods already running on that node, instead of the node labels.
Inter-pod affinity and anti-affinity rules take the form "this Pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more Pods that meet rule Y", where X is a topology domain like node, rack, cloud provider zone or region, or similar and Y is the rule Kubernetes tries to satisfy.
You express these rules (Y) as label selectors with an optional associated list of namespaces. Pods are namespaced objects in Kubernetes, so Pod labels also implicitly have namespaces. Any label selectors for Pod labels should specify the namespaces in which Kubernetes should look for those labels.
You express the topology domain (X) using a topologyKey, which is the key for the node label that the system uses to denote the domain. For examples, see Well-Known Labels, Annotations and Taints.
Pod anti-affinity requires nodes to be consistently labelled; in other words, every node in the cluster must have an appropriate label matching topologyKey. If some or all nodes are missing the specified topologyKey label, it can lead to unintended behavior.
Types of inter-pod affinity and anti-affinity
As with node affinity, there are two types of Pod affinity and anti-affinity:
- requiredDuringSchedulingIgnoredDuringExecution
- preferredDuringSchedulingIgnoredDuringExecution
For example, you could use requiredDuringSchedulingIgnoredDuringExecution affinity to tell the scheduler to co-locate Pods of two services in the same cloud provider zone because they communicate with each other a lot. Similarly, you could use preferredDuringSchedulingIgnoredDuringExecution anti-affinity to spread Pods from a service across multiple cloud provider zones.
To use inter-pod affinity, use the affinity.podAffinity field in the Pod spec. For inter-pod anti-affinity, use the affinity.podAntiAffinity field in the Pod spec.
Pod affinity example
Consider the following Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: k8s.gcr.io/pause:2.0
This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the "hard" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the "soft" preferredDuringSchedulingIgnoredDuringExecution.
The affinity rule says that the scheduler can only schedule a Pod onto a node if the node is in the same zone as one or more existing Pods with the label security=S1. More precisely, the scheduler must place the Pod on a node that has the topology.kubernetes.io/zone=V label, as long as there is at least one node in that zone that currently has one or more Pods with the Pod label security=S1.
The anti-affinity rule says that the scheduler should try to avoid scheduling the Pod onto a node that is in the same zone as one or more Pods with the label security=S2. More precisely, the scheduler should try to avoid placing the Pod on a node that has the topology.kubernetes.io/zone=R label if there are other nodes in the same zone currently running Pods with the security=S2 Pod label.
See the design doc for many more examples of Pod affinity and anti-affinity.
You can use the In, NotIn, Exists and DoesNotExist values in the operator field for Pod affinity and anti-affinity.
In principle, the topologyKey can be any allowed label key with the following exceptions for performance and security reasons:
- For Pod affinity and anti-affinity, an empty topologyKey field is not allowed in both requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution.
- For requiredDuringSchedulingIgnoredDuringExecution Pod anti-affinity rules, the admission controller LimitPodHardAntiAffinityTopology limits topologyKey to kubernetes.io/hostname. You can modify or disable the admission controller if you want to allow custom topologies.
In addition to labelSelector and topologyKey, you can optionally specify a list of namespaces which the labelSelector should match against, using the namespaces field at the same level as labelSelector and topologyKey. If omitted or empty, namespaces defaults to the namespace of the Pod where the affinity/anti-affinity definition appears.
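For example, an anti-affinity term that only looks at Pods in two named namespaces might be sketched as follows; the payments and billing namespaces are hypothetical, and the security=S2 label reuses the earlier example:
apiVersion: v1
kind: Pod
metadata:
  name: with-namespaces              # illustrative name
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S2
        namespaces:                    # only Pods in these namespaces are considered
        - payments                     # hypothetical namespace
        - billing                      # hypothetical namespace
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-namespaces
    image: k8s.gcr.io/pause:2.0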
Namespace selector
Kubernetes v1.22 [beta]
You can also select matching namespaces using namespaceSelector, which is a label query over the set of namespaces. The affinity term is applied to namespaces selected by both namespaceSelector and the namespaces field.
Note that an empty namespaceSelector ({}) matches all namespaces, while a null or empty namespaces list and null namespaceSelector matches the namespace of the Pod where the rule is defined.
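A sketch using namespaceSelector instead of an explicit namespaces list; the team=backend namespace label and the app=cache Pod label are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: with-namespace-selector      # illustrative name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - cache                    # hypothetical label on the Pods to co-locate with
        namespaceSelector:             # select namespaces by their labels
          matchLabels:
            team: backend              # assumed namespace label
        topologyKey: kubernetes.io/hostname
  containers:
  - name: with-namespace-selector
    image: k8s.gcr.io/pause:2.0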
This feature is beta and enabled by default. You can disable it via the PodAffinityNamespaceSelector feature gate in both kube-apiserver and kube-scheduler.
More practical use-cases
Inter-pod affinity and anti-affinity can be even more useful when they are used with higher-level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure that a set of workloads should be co-located in the same defined topology; for example, on the same node.
Take, for example, a three-node cluster running a web application with an in-memory cache like redis. You could use inter-pod affinity and anti-affinity to co-locate the web servers with the cache as much as possible.
In the following example Deployment for the redis cache, the replicas get the label app=store. The podAntiAffinity rule tells the scheduler to avoid placing multiple replicas with the app=store label on a single node. This places each cache replica on a separate node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
The following Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the label app=store. The Pod anti-affinity rule tells the scheduler to avoid placing multiple app=web-store servers on a single node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-store
  replicas: 3
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.16-alpine
Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.
| node-1 | node-2 | node-3 |
| --- | --- | --- |
| webserver-1 | webserver-2 | webserver-3 |
| cache-1 | cache-2 | cache-3 |
See the ZooKeeper tutorial for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique as this example.
nodeName
nodeName is a more direct form of node selection than affinity or nodeSelector. nodeName is a field in the Pod spec. If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node. Using nodeName overrules using nodeSelector or affinity and anti-affinity rules.
Some of the limitations of using nodeName to select nodes are:
- If the named node does not exist, the Pod will not run, and in some cases may be automatically deleted.
- If the named node does not have the resources to accommodate the Pod, the Pod will fail and its reason will indicate why, for example OutOfmemory or OutOfcpu.
- Node names in cloud environments are not always predictable or stable.
Here is an example of a Pod spec using the nodeName field:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01
The above Pod will only run on the node kube-01.
What's next
- Read more about taints and tolerations.
- Read the design docs for node affinity and for inter-pod affinity/anti-affinity.
- Learn about how the topology manager takes part in node-level resource allocation decisions.
- Learn how to use nodeSelector.
- Learn how to use affinity and anti-affinity.