In particular, there are cases where pods remain in the Pending state even though the cluster
has not run out of CPU or memory.
One such case is a shortage of private IPs. The node group is shown as "Degraded" in the EKS
cluster console, and you can see the following error under Health issues:
"Amazon Autoscaling was unable to launch instances because there are not enough free addresses
in the subnet associated with your AutoScaling group(s)."
You can also see that the number of "Available IPv4 addresses" for the AWS VPC subnet used by
the node group is 0.
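You can check the free address count from the command line as well, using the AWS CLI (the
subnet ID below is a placeholder; use the subnet of your node group):
# subnet-0123456789abcdef0 is a placeholder
aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 --query 'Subnets[].AvailableIpAddressCount'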
The VPC CNI plugin (the aws-node daemonset) reserves a pool of warm IPs on every node:
MINIMUM_IP_TARGET sets how many IPs each node allocates up front, and WARM_IP_TARGET how many
unused IPs it keeps ready beyond what pods are using. By capping these, you can get some IPs back.
kubectl set env -n kube-system daemonset/aws-node MINIMUM_IP_TARGET=10 WARM_IP_TARGET=2
kubectl get daemonset -n kube-system aws-node -o json | jq -r '.spec.template.spec.containers[] | select(.name == "aws-node").env'
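If the change was applied, the env array printed by the second command should contain the two
new variables, alongside the plugin's other settings; for example:
[
  {
    "name": "MINIMUM_IP_TARGET",
    "value": "10"
  },
  {
    "name": "WARM_IP_TARGET",
    "value": "2"
  }
]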
After a while, you can see that the number of "Available IPv4 addresses" in the AWS VPC subnet
has increased.
If IPs are still not enough, consider the following two approaches.
1. Check the HPA status and adjust it appropriately, so that an excessive number of pods is not
created because the CPU and memory requests allocated to the application are too small. In the
manifests below, "=>" marks the values to tune.
kubectl get hpa -n test
NAME         REFERENCE               TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
test-nginx   Deployment/test-nginx   10%/80%, 9%/80%   7         200       7          14d
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: test-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-nginx
  minReplicas: 7
  maxReplicas: 200
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80  # =>
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80  # =>
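After editing the manifest, a typical way to apply it and watch the replica count settle is the
following (the file name is an assumption):
kubectl apply -f test-nginx-hpa.yaml
kubectl get hpa -n test -w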
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
spec:
  selector:
    matchLabels:
      app: test-nginx
  replicas: 7
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: test-nginx  # must match the selector; omitted in the original
    spec:
      containers:
      - name: nginx
        image: nginx  # image omitted in the original; assumed here
        imagePullPolicy: Always
        resources:
          requests:
            memory: "200Mi"  # =>
            cpu: "100m"      # =>
          limits:
            memory: "1Gi"
            cpu: "500m"
      nodeSelector:
        team: test
        environment: prod
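Likewise for the deployment, apply it and wait for the rollout to finish (again, the file name
is an assumption):
kubectl apply -f test-nginx-deployment.yaml
kubectl rollout status deployment/test-nginx -n test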
2. If IPs are still not enough even after tuning, add a node group to which another subnet is
assigned. (Below is a Terraform snippet; "=>" marks the added node group.)
test = {
  desired_capacity = 5
  max_capacity     = 15
  min_capacity     = 4
  subnets          = [element(module.vpc.private_subnets, 0)]
  disk_size        = 30
  k8s_labels = {
    team        = "test"
    environment = "prod"
  }
},
test2 = {  # => the added node group on a different subnet
  desired_capacity = 5
  max_capacity     = 15
  min_capacity     = 4
  subnets          = [element(module.vpc.private_subnets, 4)]
  disk_size        = 30
  k8s_labels = {
    team        = "test"
    environment = "prod"
  }
},
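After adding the node group, a typical flow is to review and apply the change, then confirm
that the new nodes have joined (the label selector follows the k8s_labels above):
terraform plan
terraform apply
kubectl get nodes -l team=test,environment=prod -o wide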