Enable the Alibaba Cloud image accelerator
Open the Alibaba Cloud website and search for Container Registry (容器镜像服务).
Follow its documentation step by step:
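For reference, the console walks you through something like the following; the mirror URL below is a placeholder, and you should use the personal accelerator address shown in your own console:
# Write the accelerator address into Docker's daemon.json
# (https://<your-id>.mirror.aliyuncs.com is a placeholder for your personal address)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
# Reload and restart Docker so the mirror takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker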
At this point, congratulations: Docker is installed!
4. Add the Alibaba Cloud YUM repository for Kubernetes
The official Kubernetes package repository is hosted overseas and is very slow to access, so switch to the domestic Alibaba Cloud mirror:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
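A quick, optional check that the new repository is active:
# Should list the kubernetes repo added above
yum repolist enabled | grep -i kubernetes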
5. Install kubeadm, kubelet, and kubectl
Because new versions are released frequently, pin specific version numbers for the deployment:
yum install -y kubelet-1.21.10 kubeadm-1.21.10 kubectl-1.21.10
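Optionally, confirm the pinned versions actually landed, e.g.:
# All three should report v1.21.10
kubeadm version -o short
kubelet --version
kubectl version --client --short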
To keep the cgroup driver used by Docker consistent with the cgroup driver used by the kubelet, it is recommended to modify the contents of the /etc/sysconfig/kubelet file:
vim /etc/sysconfig/kubelet
# Change the contents to:
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
vim /etc/docker/daemon.json
Add:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
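If you configured the image accelerator earlier, daemon.json already exists, so merge the keys rather than overwriting the file. A sketch of the merged result (the mirror URL is a placeholder):
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}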
Restart Docker and check the cgroup driver:
systemctl restart docker
docker info | grep -i cgroup
Just enable the kubelet to start on boot; since no configuration file has been generated yet, it will only start automatically after the cluster is initialized:
systemctl enable kubelet
6. List and pull the images Kubernetes needs
List the images required by the Kubernetes installation:
kubeadm config images list
Pull the images needed for the installation from the Alibaba Cloud mirror (note that the versions below, v1.21.13, should match the --kubernetes-version passed to kubeadm init in the next step):
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.13
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.13
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.13
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.13
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0
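The same pulls can also be written as a loop; a minimal sketch over the image list above:
# Pull every required image from the Alibaba Cloud mirror
images=(
  kube-apiserver:v1.21.13
  kube-controller-manager:v1.21.13
  kube-scheduler:v1.21.13
  kube-proxy:v1.21.13
  pause:3.4.1
  etcd:3.4.13-0
  coredns:v1.8.0
)
for img in "${images[@]}"; do
  docker pull "registry.cn-hangzhou.aliyuncs.com/google_containers/${img}"
done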
Re-tag the coredns image, because kubeadm looks for it under the coredns/coredns path while the Alibaba Cloud mirror publishes it as coredns:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 \
  registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
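You can confirm that both tags now point at the same image ID:
docker images | grep coredns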
7. Deploy the Kubernetes Master node
Deploy the Kubernetes Master node on the 192.168.172.101 machine.
Because the default registry, k8s.gcr.io, is not reachable from inside China, specify the Alibaba Cloud registry here:
kubeadm init \
  --apiserver-advertise-address=192.168.172.101 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version=v1.21.13 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Note:
apiserver-advertise-address must be the host's own IP address.
apiserver-advertise-address, service-cidr, and pod-network-cidr must not be in the same network range; the three must not overlap.
Do not use the 172.17.0.1/16 range, because Docker uses it by default.
If it succeeds, you will see log output like the following:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.172.101:6443 --token zrelll.4hcypbur50301tm4 \
        --discovery-token-ca-cert-hash sha256:3c0ea6e5d64d95ec454c4db6f2b88ab50918802845231a334bc7aaef00782c5d
Following the prompts in the log, run these commands on 192.168.172.101:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# If you are the root user, you can instead run:
export KUBECONFIG=/etc/kubernetes/admin.conf
The default token is valid for 24 hours; once it expires it can no longer be used, and you can create a new one with the following command:
kubeadm token create --print-join-command
# Generate a token that never expires
kubeadm token create --ttl 0 --print-join-command
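To see which tokens currently exist and when they expire:
kubeadm token list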
8. Deploy the Kubernetes Node nodes
Following the prompts in the log, run this command on 192.168.172.102 and 192.168.172.103:
kubeadm join 192.168.172.101:6443 --token zrelll.4hcypbur50301tm4 \
  --discovery-token-ca-cert-hash sha256:3c0ea6e5d64d95ec454c4db6f2b88ab50918802845231a334bc7aaef00782c5d
9. Deploy a network plugin
Kubernetes supports several network plugins, such as flannel, calico, and canal; any one of them will do. Here we use calico (run on the 192.168.172.101 node):
kubectl apply -f https://projectcalico.docs.tigera.io/v3.19/manifests/calico.yaml
Watch the progress of the CNI network plugin deployment:
kubectl get pods -n kube-system
watch kubectl get pods -n kube-system
10. Check node status
On the Master (192.168.172.101) node, check the node status:
kubectl get nodes
11. Set kube-proxy to ipvs mode
On the Master (192.168.172.101) node, switch kube-proxy to ipvs mode:
kubectl edit cm kube-proxy -n kube-system
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"    # change this line from "" to "ipvs"
    nodePortAddresses: null
...
Delete the existing kube-proxy Pods so the cluster automatically recreates them with the new configuration:
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
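One way to confirm the switch took effect, assuming the ipvsadm tool is installed (yum install -y ipvsadm):
# Lists the virtual servers that ipvs has created for the cluster Services
ipvsadm -Ln
# Alternatively, the recreated kube-proxy Pods should log that the ipvs proxier is in use
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs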
12. Make kubectl usable on the Node nodes as well
By default, only the Master node has a working kubectl, but sometimes we also want to run kubectl commands on the Node nodes:
# On the Node nodes, 192.168.172.102 and 192.168.172.103
mkdir -pv ~/.kube
touch ~/.kube/config
# On the Master node, 192.168.172.101
scp /etc/kubernetes/admin.conf root@192.168.172.102:~/.kube/config
scp /etc/kubernetes/admin.conf root@192.168.172.103:~/.kube/config
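Afterwards, kubectl should work on the workers as well, e.g.:
# Run on 192.168.172.102 or 192.168.172.103
kubectl get nodes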
III. Installing Nginx on Kubernetes
Deploy Nginx:
kubectl create deployment nginx --image=nginx:1.14-alpine
Expose the port:
kubectl expose deployment nginx --port=80 --type=NodePort
Check the Pod and Service status:
kubectl get pods,svc
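Once the Pod is Running, the Service can be reached through any node's IP on the assigned NodePort; <node-port> below is a placeholder for the port that kubectl get svc reports (in the 30000-32767 range):
curl http://192.168.172.101:<node-port>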
IV. Installing command-line auto-completion
# Install
yum -y install bash-completion
# Per-user completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
# Global completion (run as root)
kubectl completion bash > /etc/bash_completion.d/kubectl
# Global completion (via sudo)
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
source /usr/share/bash-completion/bash_completion
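Optionally, the pattern from the kubectl documentation also gives you a short alias with the same completion:
# Make `k` behave like kubectl, with tab completion
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc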
That wraps up the k8s cluster setup. I will probably pause my Kubernetes studies for quite a while and go back to shore up some other fundamentals first. See you.