Preface:

ResourceQuota, literally "resource quota".

A ResourceQuota object can be created per namespace. Afterwards, as users create resource objects in that namespace, the ResourceQuota admission controller tracks usage to ensure it does not exceed the system resource limits defined in the corresponding ResourceQuota object. A create or update request that would violate a quota constraint fails: the API server responds with HTTP status code "403 FORBIDDEN" and a message indicating the constraint that would be violated. Note that once quotas for system resources such as CPU and memory are enabled on a namespace, users must specify resource requests or limits when creating Pod objects; otherwise, the ResourceQuota admission controller rejects the operation.
The definition above is a bit abstract, so in plain terms: first, "resource" here means things like CPU, memory, and pod count; second, this controller is scoped by namespace, i.e. one ResourceQuota is defined per namespace, via a resource manifest file. For example, given namespaces A, B, and C, you can write a YAML file for namespace A that defines the resource allowance that namespace may use; the total resources consumed by all pods in that namespace then may not exceed the quota defined in the YAML. If a new pod would exceed the quota defined by the controller, its creation fails and the apiserver returns an error.
The following walks through a practical example of using the resource quota controller.
1.
Create a ResourceQuota in the namespace myrq; the relevant commands and manifest are as follows:
kubectl create ns myrq
cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myrq
  namespace: myrq
spec:
  hard:
    pods: "2"
This ResourceQuota defines only a pod count: the total quota in namespace myrq is 2.
Apply the YAML file and inspect the ResourceQuota details:
kubectl apply -f quota.yaml
kubectl describe resourcequotas -n myrq myrq
root@k8s-master:~# kubectl describe resourcequotas -n myrq myrq
Name:       myrq
Namespace:  myrq
Resource    Used  Hard
--------    ----  ----
pods        0     2
At this point pods used is 0 against a quota limit of 2. Listing pods in myrq confirms there are indeed none:
root@k8s-master:~# kubectl get po -n myrq
No resources found in myrq namespace.
OK, now exercise the quota: create 3 pods and see whether they all succeed:
kubectl create deployment nginx --image=nginx --replicas=3 -n myrq
root@k8s-master:~# kubectl describe deployments.apps -n myrq nginx
Name:                   nginx
Namespace:              myrq
CreationTimestamp:      Sat, 21 Jan 2023 12:26:34 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 2 updated | 2 total | 0 available | 3 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       False   MinimumReplicasUnavailable
  ReplicaFailure  True    FailedCreate
  Progressing     True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   nginx-6799fc88d8 (2/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  16s   deployment-controller  Scaled up replica set nginx-6799fc88d8 to 3
root@k8s-master:~# kubectl get deployments.apps -n myrq
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/3     2            2           2m42s
root@k8s-master:~# kubectl get po -n myrq
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-4rznt   1/1     Running   0          45s
nginx-6799fc88d8-bxf4w   1/1     Running   0          45s
root@k8s-master:~# kubectl describe resourcequotas -n myrq
Name:       myrq
Namespace:  myrq
Resource    Used  Hard
--------    ----  ----
pods        2     2
OK, the quota is now fully used. The deployment asked for 3 replicas, but only two pods were created: the resource quota took effect.
Can one more new pod be created?
kubectl create deployment nginx1 --image=nginx --replicas=1 -n myrq
As shown below, the new pod was not created:
root@k8s-master:~# kubectl get deployments.apps -n myrq
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
nginx    2/3     2            2           4m38s
nginx1   0/1     0            0           23s
Raise the quota from 2 to 20:
root@k8s-master:~# cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myrq
  namespace: myrq
spec:
  hard:
    pods: "20"
root@k8s-master:~# kubectl apply -f quota.yaml
resourcequota/myrq configured
root@k8s-master:~# kubectl describe resourcequotas -n myrq myrq
Name:       myrq
Namespace:  myrq
Resource    Used  Hard
--------    ----  ----
pods        2     20
Delete nginx1 and redeploy it. With the higher quota the deployment now succeeds, and the three-replica nginx automatically recovers to its desired state:
root@k8s-master:~# kubectl delete deployments.apps -n myrq nginx1
deployment.apps "nginx1" deleted
root@k8s-master:~# kubectl create deployment nginx1 --image=nginx --replicas=1 -n myrq
deployment.apps/nginx1 created
root@k8s-master:~# kubectl get deployments.apps -n myrq
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
nginx    3/3     3            3           19m
nginx1   1/1     1            1           6m5s
root@k8s-master:~# kubectl describe resourcequotas -n myrq myrq
Name:       myrq
Namespace:  myrq
Resource    Used  Hard
--------    ----  ----
pods        4     20
2.
OK, now modify the quota YAML file above as follows:
root@k8s-master:~# cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myrq
  namespace: myrq
spec:
  hard:
    pods: "20"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    count/deployments.apps: "3"
    count/deployments.extensions: "3"
    persistentvolumeclaims: "2"
After applying the new YAML file, inspect the details:
root@k8s-master:~# kubectl describe resourcequotas -n myrq
Name:                         myrq
Namespace:                    myrq
Resource                      Used  Hard
--------                      ----  ----
count/deployments.apps        2     3
count/deployments.extensions  0     3
limits.cpu                    0     2
limits.memory                 0     2Gi
persistentvolumeclaims        0     2
pods                          4     20
requests.cpu                  0     1
requests.memory               0     1Gi
The deployment quota is defined as 3. The two deployments created earlier, nginx and nginx1, account for the count of 2; nginx has three replicas and nginx1 has one, hence 4 pods. All of this matches reality.
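The `count/<resource>.<group>` syntax used above generalizes to other object kinds. A minimal sketch follows (the object name is hypothetical and the values are arbitrary; these resource names are standard, but the exact set your cluster accepts depends on its version):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts      # hypothetical name, not from the example above
  namespace: myrq
spec:
  hard:
    count/configmaps: "10"   # core-group objects can also be counted
    count/secrets: "10"
    count/services: "5"
    count/jobs.batch: "4"    # non-core groups use the <resource>.<group> form
```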
OK, create two more deployments and see whether they succeed.
nginx2's pod does not run: since the quota YAML was re-applied with CPU and memory specified, pods must now declare those resources, so the deployment itself needs to be changed.
root@k8s-master:~# kubectl create deployment nginx2 --image=nginx --replicas=1 -n myrq
deployment.apps/nginx2 created
root@k8s-master:~# kubectl create deployment nginx3 --image=nginx --replicas=1 -n myrq
error: failed to create deployment: deployments.apps "nginx3" is forbidden: exceeded quota: myrq, requested: count/deployments.apps=1, used: count/deployments.apps=3, limited: count/deployments.apps=3
root@k8s-master:~# kubectl get deployments.apps -n myrq
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
nginx    3/3     3            3           34m
nginx1   1/1     1            1           21m
nginx2   0/1     0            0           4m8s
When creating the fourth deployment, the apiserver returned an error saying the creation failed: three deployments were already in use and the quota is also three.
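As noted above, once the quota covers CPU and memory, pods that declare no requests or limits are rejected outright. One common remedy, sketched here (the LimitRange name and values are illustrative, not part of the original setup), is a LimitRange in the same namespace that injects defaults into such pods so they can still be admitted:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: myrq-defaults      # hypothetical name
  namespace: myrq
spec:
  limits:
  - type: Container
    default:               # applied as limits when a container declares none
      cpu: 500m
      memory: 512Mi
    defaultRequest:        # applied as requests when a container declares none
      cpu: 250m
      memory: 256Mi
```

With such a LimitRange in place, the bare `kubectl create deployment` commands used in this walkthrough would get default requests/limits and count against the CPU/memory quota instead of being refused.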
OK, delete the nginx2 deployment and recreate it with CPU and memory declared, consuming the CPU and memory quota:
root@k8s-master:~# cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myrq
  namespace: myrq
spec:
  hard:
    pods: "20"
    requests.cpu: "20"
    requests.memory: 20Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    count/deployments.apps: "3"
    count/deployments.extensions: "3"
    persistentvolumeclaims: "2"
root@k8s-master:~# cat dep-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx2
  name: nginx2
  namespace: myrq
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx2
    spec:
      containers:
      - image: nginx:1.18
        name: nginx2
        resources:
          requests:
            cpu: 500m
            memory: 500Mi
          limits:
            cpu: 1000m
            memory: 1Gi
status: {}
After applying the file above, nginx2 has only two replicas:
root@k8s-master:~# kubectl get deployments.apps -n myrq
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
nginx1   1/1     1            1           50m
nginx2   2/3     2            2           3m20s
Checking the quota shows the memory limit quota is exhausted: three replicas would need 3Gi of limits against only 2Gi of quota, and the CPU limit quota is likewise only 2, so one replica is missing:
root@k8s-master:~# kubectl describe resourcequotas -n myrq
Name:                         myrq
Namespace:                    myrq
Resource                      Used    Hard
--------                      ----    ----
count/deployments.apps        2       3
count/deployments.extensions  0       3
limits.cpu                    2       2
limits.memory                 2Gi     2Gi
persistentvolumeclaims        0       2
pods                          3       20
requests.cpu                  1       20
requests.memory               1000Mi  20Gi
Modify the quota to raise the CPU and memory limits: set limits.cpu to 20 and limits.memory to 20Gi, then re-apply both files:
root@k8s-master:~# cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myrq
  namespace: myrq
spec:
  hard:
    pods: "20"
    requests.cpu: "20"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 20Gi
    count/deployments.apps: "3"
    count/deployments.extensions: "3"
    persistentvolumeclaims: "2"
Check nginx2; it is now correctly deployed:
root@k8s-master:~# kubectl get deployments.apps -n myrq
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
nginx1   1/1     1            1           57m
nginx2   3/3     3            3           9s
Check the quota. Each nginx2 pod has a 1-CPU limit, so three replicas use 3 CPUs; memory works the same way (3 × 1Gi = 3Gi):
root@k8s-master:~# kubectl describe resourcequotas -n myrq
Name:                         myrq
Namespace:                    myrq
Resource                      Used    Hard
--------                      ----    ----
count/deployments.apps        2       3
count/deployments.extensions  0       3
limits.cpu                    3       20
limits.memory                 3Gi     20Gi
persistentvolumeclaims        0       2
pods                          4       20
requests.cpu                  1500m   20
requests.memory               1500Mi  20Gi