Cloud Native | Kubernetes | Kubernetes 1.18 Binary Installation Tutorial, Single Master (Other Versions Are Basically the Same) (Part 2)

Summary: Cloud Native | Kubernetes | Kubernetes 1.18 binary installation tutorial, single master (other versions are basically the same).

Part 1: https://developer.aliyun.com/article/1399626


Generate the kube-proxy.kubeconfig file

vim /opt/kubernetes/ssl/kube-proxy-csr.json

 

{"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}

Generate the certificate:

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes \
  /opt/kubernetes/ssl/kube-proxy-csr.json | cfssljson -bare kube-proxy

Copy the certificates:

cp kube-proxy-key.pem kube-proxy.pem /opt/kubernetes/ssl/
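Before generating the kubeconfig, you can optionally sanity-check the new certificate with openssl (not part of the original steps; assumes openssl is installed):

openssl x509 -in /opt/kubernetes/ssl/kube-proxy.pem -noout -subject -dates
# The subject should include CN=system:kube-proxy and O=k8s;
# the dates show the certificate's validity window.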

Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.217.16:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
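A quick way to verify the finished file before distributing it (an optional check):

kubectl config view --kubeconfig=kube-proxy.kubeconfig
# Embedded certificates print as REDACTED/DATA+OMITTED; confirm that the
# server address, cluster, user and current-context fields are all set.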

Service startup script:

[root@master bin]# cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Start the service and enable it at boot:

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
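If kube-proxy fails to come up, the usual first checks are the unit status and the recent log (standard systemd tooling, nothing specific to this setup):

systemctl status kube-proxy
journalctl -u kube-proxy --no-pager | tail -n 20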

(14)

Deploy the worker nodes (building on the master node deployment)

Copy the relevant files to node1 (configuration files, certificates, binaries, and service unit files; node1 is demonstrated here, and node2 gets exactly the same treatment):

First create the directories on the node1 server:

mkdir -p /opt/kubernetes/{cfg,bin,ssl,logs}
scp /opt/kubernetes/bin/{kubelet,kube-proxy}   k8s-node1:/opt/kubernetes/bin/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service}   k8s-node1:/usr/lib/systemd/system/
scp /opt/kubernetes/cfg/{kubelet.conf,kube-proxy.conf,kube-proxy-config.yml,kubelet-config.yml,bootstrap.kubeconfig,kube-proxy.kubeconfig}  k8s-node1:/opt/kubernetes/cfg/
scp /opt/kubernetes/ssl/{ca-key.pem,ca.pem,kubelet.crt,kubelet.key,kube-proxy-key.pem,kube-proxy.pem,server-key.pem,server.pem}  k8s-node1:/opt/kubernetes/ssl/

Change the hostname in two files:

In kube-proxy-config.yml and kubelet.conf, change the value of --hostname-override= to the current host's name; for example, when editing on the node1 server, use k8s-node1. A sketch of this edit follows.
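A minimal sketch of that edit on node1, assuming the copied files still carry the master's name k8s-master (confirm with the grep first):

# Locate the hostname settings that came over from the master.
grep -n hostname /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml
# Replace the old value with this node's name.
sed -i 's/k8s-master/k8s-node1/g' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml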

Nothing else needs to change; start the services on the node and check their status, as shown in the sketch below. Once they are running, approve the node's join request on the master (node1's approval is the example here; note that k is an alias for kubectl).
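On the node, the start sequence mirrors the master (a sketch, assuming the unit files copied above are in place):

systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
# Both units should report active (running).
systemctl status kubelet kube-proxy --no-pager

Then, back on the master: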

[root@master ssl]# k get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U   51s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
[root@master ssl]# kubectl certificate approve node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U
certificatesigningrequest.certificates.k8s.io/node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U approved

Check the result on the master to verify everything is normal:

[root@master cfg]# k get no
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   <none>   4h40m   v1.18.3
k8s-node1    NotReady   <none>   34m     v1.18.3
k8s-node2    NotReady   <none>   33m     v1.18.3

At this point the nodes show NotReady. The reason, as the kubelet log shows, is that the CNI plugin configuration has not been initialized:

Aug 27 15:58:56 master kubelet: E0827 15:58:56.093236   14623 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

(15)

Install the network plugin

Import the Docker image package quay.io_coreos_flannel_v0.13.0.tar on all nodes, i.e. run docker load < quay.io_coreos_flannel_v0.13.0.tar on each of them.
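If the tarball sits on the master, a small loop can push and load it on the other nodes (a sketch assuming passwordless ssh to k8s-node1 and k8s-node2 and the file in the current directory):

for n in k8s-node1 k8s-node2; do
  scp quay.io_coreos_flannel_v0.13.0.tar "$n":/tmp/
  ssh "$n" 'docker load < /tmp/quay.io_coreos_flannel_v0.13.0.tar'
done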

[root@master cfg]# cat ../bin/kube-flannel.yml 
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Running kubectl apply -f kube-flannel.yml against the file above installs the network plugin.
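To confirm the plugin is up, watch the DaemonSet pods and the node status (the pods carry the app=flannel label from the manifest above):

kubectl -n kube-system get pods -l app=flannel -o wide
# Once a flannel pod is Running on every node, the nodes should turn Ready.
kubectl get nodes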

(16)

Install CoreDNS

There are five configuration files in total, followed by a DNS test:

The ServiceAccount:

cat << EOF > coredns-ns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
EOF

The CoreDNS ConfigMap:

cat << EOF > coredns-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
EOF

The CoreDNS RBAC (roles and bindings) manifest:

cat << EOF > coredns-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
EOF

The CoreDNS Deployment manifest:

cat << EOF > coredns-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
           - labelSelector:
               matchExpressions:
               - key: k8s-app
                 operator: In
                 values: ["kube-dns"]
             topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.aliyuncs.com/google_containers/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
EOF

The CoreDNS Service manifest. Note that the clusterIP value here must match the cluster DNS address configured in kubelet-config.yml.

cat << EOF > coredns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
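With all five manifests written out, apply them together and wait for the Deployment to become available (a sketch; the ServiceAccount and RBAC objects should exist before the pods start, which the file order below ensures):

kubectl apply -f coredns-ns.yaml -f coredns-rbac.yaml -f coredns-cm.yaml \
  -f coredns-deploy.yaml -f coredns-svc.yaml
kubectl -n kube-system rollout status deployment/coredns
kubectl -n kube-system get svc kube-dns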

Testing CoreDNS:

Start a temporary pod to run the test commands:

kubectl run -i --tty --image busybox:1.28.3 -n web  dns-test --restart=Never --rm

Both in-cluster and external names should resolve without problems; output like the following means CoreDNS is working correctly.

/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup kubernetes.default
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup baidu.com
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      baidu.com
Address 1: 110.242.68.66
Address 2: 39.156.66.10
/ # cat /etc/resolv.conf 
nameserver 10.0.0.2
search web.svc.cluster.local svc.cluster.local cluster.local localdomain default.svc.cluster.local
options ndots:5

(17)

Install the Dashboard

Load the two files from the network drive, kubernetesui_metrics-scraper_v1.0.6.tar and dashboard.tar, with docker load on all three nodes, since a Deployment can be scheduled onto any of them.

Cluster role binding:

kubectl create clusterrolebinding default --clusterrole=cluster-admin --serviceaccount=kube-system:default --namespace=kube-system

The deployment YAML is shown below (it is long; apply it with kubectl apply -f dashboard.yml):

[root@master ~]# cat dashboard.yml 
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30008
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
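After applying the manifest, a quick look at the namespace shows whether both Deployments came up and the NodePort is exposed:

kubectl -n kubernetes-dashboard get pods,svc
# Expect two Running pods and the kubernetes-dashboard Service
# of type NodePort mapping 443 to 30008.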

Get the login token:

kubectl describe secrets $(kubectl describe sa default -n kube-system | grep Mountable | awk 'NR == 2 {next} {print $3}') -n kube-system
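If that one-liner is hard to read, an equivalent way to extract the same token with jsonpath (a sketch against the same default ServiceAccount in kube-system):

SECRET=$(kubectl -n kube-system get sa default -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d; echo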

Login URL: https://nodeip:30008 (use any node's IP).

What about logging in with a config file? That file is bootstrap.kubeconfig: copy it to your desktop, then choose it on the login page. Its content should look like this:

[root@master ~]# cat /opt/kubernetes/cfg/bootstrap.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVRDYzSGpYeFRiM3EzdGZUeEM4QjZwalUzYUVRd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl5TURneU56QXhNVFV3TUZvWERUSTNNRGd5TmpBeE1UVXdNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeitLb3pMQlVEYXNQTmxKc2lMSXoKZHRUR0M4a2tvNnVJZjVUSUNkY3pPemJyaks1TVJ4UzYrd2ZwVzNHOGFtN2J1QlcvRE1hcW15dGNlbEwxd0VpMwoxUGZTNE9oWXRlczUwWU4wMkZrZ2JCYmVBSVN3NnJ5cnRadGFxdWhZeHUwQjlOLzVuZGhETUx2ZFhFV1NYYWZrCmtWQXNnTFZ0dmNPMCtKVUt3OGE5eFJSRTkyWThZYXZ0azN4M3VBU2hTejUrS3FQZ1V6Q2x2a2N4UUpXVFBiTkUKOEpERXlaY0I0ay8za0NuOGtsREc3Um9Wck1hcHJ6Z3lNQkNVOEgzS1hsM0FJdkFYNGVQQTFOVGJzbUhWaDdCcgpmeWdLT0x5RHA3OUswbkp1c0NtY1JmRGJ4TWJMVCtNeU01Y0NFcm1LMkVnSTRuYXIyMndQQU5kemRTb1dIbDljCkp3SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVZG1mUXNkMy85czVoKzl0V1dDMHhBL1htSENZd0h3WURWUjBqQkJnd0ZvQVVkbWZRc2QzLwo5czVoKzl0V1dDMHhBL1htSENZd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJR0lPa0xDMlpxL3Y4dStNelVyCkpVRUY4SFV2QzdldkdXV3ZMMmlqd3dxTnJMWE1XYWd6UWJxcDM5UTVlRnRxdngzSWEveFZFZ0ZTTnBSRjNMNGYKN0VYUlYxRXpJVlUxeThaSnZzVXl1aFQyVENCQ3UwMkdvSWc1VGlJNzMwak5xQllMRGh6SVJKLzBadEtTQlcwaApIUEo3eGRybnZSdnY3WG9uT1dCbldBTUhJY0N0TzNLYlovbXY1VHBoTnkzWHJMSTdRaFVvWVlQSXN5N1BvUjhVCm9WVm80RkRRUDFPYXhGSzljSE1DNWNuLzFNSnRZUGpVRzg5RldEc01HbWVVZVZ1cnhsVStkVlFUMUZzOWJxanoKaDJWaHNtanFCK3RCbjVGdENOaEY5STVxYlJuMWJmTGRpQzl2QzJ3U00xSDZQVWRxeHB6ZlRaVHhSbEptdmtjZAo1ZTQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.217.16:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940

Two things deserve special attention. One is the token, which is the one defined in the token.csv file; the other is the kubelet-bootstrap user. Logging in with this config file may fail with an insufficient-permissions error.

In that case, delete the user's existing cluster role binding and rebind it to cluster-admin:

kubectl delete clusterrolebindings kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=cluster-admin  --user=kubelet-bootstrap
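To confirm the rebinding took effect, kubectl can impersonate the user (a quick check; it should print yes):

kubectl auth can-i '*' '*' --as=kubelet-bootstrap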
