Pod Failed: Worker Node Ran Out of Memory - Kubernetes

Kubernetes Appliance Software Installation Guide

Product: Kubernetes
Published: June 2018
Language: English (United States)
Last Update: 2019-01-31
Product Category: Software
Review the /var/log/messages log file on the worker node to check the status of the failed pod. In the following example, the failed pod is on backend8-123 (192.168.123.73), and the log file shows that the worker node ran out of memory:
2018-04-11T17:56:25.619798-04:00 backend8-123 kernel: [   69.622469] Out of memory: Kill process 4086 (java) score 1383 or sacrifice child
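For example, you can filter the log for OOM-killer events with a command similar to the following (the timestamps, process names, and scores will differ on your system):
grep -i "out of memory" /var/log/messages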
The log also shows that the container in the failed pod is not restarting; it is in a CrashLoopBackOff state:
2018-04-11T19:30:48.131646-04:00 backend8-123 kubelet[1819]: E0411 19:30:48.121650    1819 pod_workers.go:186] Error syncing pod 0d0131c3-3dd2-11e8-b9cd-525400447302 ("elasticserch-logging-846bf5db8b-npf6x_kube-system(0d0131c3-3dd2-11e8-b9cd-525400447302)"), skipping: failed to "StartContainer" for "elasticsearch-logging" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=elasticsearch-logging pod=elasticserch-logging-846bf5db8b-npf6x_kube-system(0d0131c3-3dd2-11e8-b9cd-525400447302)"
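You can confirm the same condition from the kubectl command line; for example, commands similar to the following (using the pod name and namespace from the log entry above) show the CrashLoopBackOff status and restart count:
kubectl get pods -n kube-system
kubectl describe pod elasticserch-logging-846bf5db8b-npf6x -n kube-system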
Based on these log entries, you can determine that the worker node needs more memory. For more information about debugging applications, see https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/.
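Before adding memory, you can check the node's current capacity and how much of it is already committed to pods; for example, a command similar to the following (assuming the node is registered in the cluster under the hostname shown in the log) reports the node's allocatable memory and allocated resources:
kubectl describe node backend8-123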