Asked by: 小点点

Kubernetes worker node CPU and memory requests always remain zero


Hi, I am new to Kubernetes.

1) I am unable to scale containers/pods onto the worker node. Its memory usage always remains zero. What could be the reason?

2) Whenever I scale pods/containers, they are always created on the master node.

3) Is there a way to restrict pods to a specific node?

4) How do the pods get distributed across nodes when I scale?

Any help is appreciated.

kubectl version

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

kubectl describe node

Name:               worker-node
Roles:              worker
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=worker-node
                    node-role.kubernetes.io/worker=worker
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 19 Feb 2019 15:03:33 +0530
Taints:             node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------    -----------------                 ------------------                ------                       -------
  MemoryPressure   False     Tue, 19 Feb 2019 18:57:22 +0530   Tue, 19 Feb 2019 15:26:13 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True      Tue, 19 Feb 2019 18:57:22 +0530   Tue, 19 Feb 2019 15:26:23 +0530   KubeletHasDiskPressure       kubelet has disk pressure
  PIDPressure      False     Tue, 19 Feb 2019 18:57:22 +0530   Tue, 19 Feb 2019 15:26:13 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True      Tue, 19 Feb 2019 18:57:22 +0530   Tue, 19 Feb 2019 15:26:13 +0530   KubeletReady                 kubelet is posting ready status. AppArmor enabled
  OutOfDisk        Unknown   Tue, 19 Feb 2019 15:03:33 +0530   Tue, 19 Feb 2019 15:25:47 +0530   NodeStatusNeverUpdated       Kubelet never posted node status.
Addresses:
  InternalIP:  192.168.1.10
  Hostname:    worker-node
Capacity:
 cpu:                4
 ephemeral-storage:  229335396Ki
 hugepages-2Mi:      0
 memory:             16101704Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  211355500604
 hugepages-2Mi:      0
 memory:             15999304Ki
 pods:               110
System Info:
 Machine ID:                 1082300ebda9485cae458a9761313649
 System UUID:                E4DAAC81-5262-11CB-96ED-94898013122F
 Boot ID:                    ffd5ce4b-437f-4497-9337-e72c06f88429
 Kernel Version:             4.15.0-45-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.1
 Kubelet Version:            v1.13.3
 Kube-Proxy Version:         v1.13.3
PodCIDR:                     192.168.1.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----    ------------  ----------  ---------------  -------------  ---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:
  Type     Reason                Age                     From                       Message
  ----     ------                ----                    ----                       -------
  Normal   Starting              55m                     kube-proxy, worker-node  Starting kube-proxy.
  Normal   Starting              55m                     kube-proxy, worker-node  Starting kube-proxy.
  Normal   Starting              33m                     kube-proxy, worker-node  Starting kube-proxy.
  Normal   Starting              11m                     kube-proxy, worker-node  Starting kube-proxy.
  Warning  EvictionThresholdMet  65s (x1139 over 3h31m)  kubelet, worker-node     Attempting to reclaim ephemeral-storage


1 Answer

Anonymous user

That is strange, because by default Kubernetes taints the master node to keep ordinary pods from being scheduled there.

kubectl get nodes --show-labels

Now check whether your master carries this taint:

node-role.kubernetes.io/master=true:NoSchedule
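
Note that --show-labels only prints labels; taints live separately in the node spec. Assuming a standard kubeadm cluster, one way to list the taints on every node is:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

(In the describe output above, the worker node carries a node.kubernetes.io/disk-pressure:NoSchedule taint, which likewise keeps new pods off that node until disk space is reclaimed.)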

If your master does not have this taint, you can taint it yourself:

kubectl taint nodes $HOSTNAME node-role.kubernetes.io/master=true:NoSchedule
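
For reference, a taint added this way can later be removed by appending a minus sign to the effect:

kubectl taint nodes $HOSTNAME node-role.kubernetes.io/master=true:NoSchedule-

As for question 3 (restricting pods to a specific node), a minimal sketch is to add a nodeSelector to the pod spec matching the kubernetes.io/hostname label shown in the node output above; the pod name and image here are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod                        # hypothetical example name
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-node   # label taken from the describe output above
  containers:
  - name: app                             # hypothetical container name
    image: nginx                          # hypothetical image

With a nodeSelector, the scheduler only considers nodes whose labels match; without one, it scores all schedulable nodes and spreads replicas across them, which is how pods normally get distributed when you scale.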