Deploying a Kubernetes Cluster with kubeadm (single master + 2 nodes)
Introduction to kubeadm
Overview
kubeadm is a tool that provides the kubeadm init and kubeadm join commands as best-practice "fast paths" for creating Kubernetes clusters.
kubeadm performs the actions necessary to get a minimal viable cluster up and running. It cares only about bootstrapping the cluster, not about anything else: preparing the nodes beforehand, installing the Kubernetes Dashboard, monitoring solutions, and cloud-provider-specific add-ons are all outside kubeadm's scope.
kubeadm subcommands
kubeadm init — bootstrap a Kubernetes control-plane (master) node;
kubeadm join — bootstrap a Kubernetes worker node and join it to the cluster;
kubeadm upgrade — upgrade a Kubernetes cluster to a newer version;
kubeadm config — if you initialized the cluster with kubeadm v1.7.x or lower, use this to configure the cluster so that kubeadm upgrade can be used;
kubeadm token — manage the tokens used by kubeadm join;
kubeadm reset — revert any changes kubeadm init or kubeadm join made to the host;
kubeadm version — print the kubeadm version;
kubeadm alpha — preview a set of new features to gather feedback from the community.
Name    IP               Role     CPU   Mem(GB)
node7   192.168.58.150   master   2     2
node8   192.168.58.151   node1    2     2
node9   192.168.58.152   node2    2     2
This experiment uses the configuration above for now (use at your own risk).
Environment preparation
All three nodes run CentOS with kernel 3.10.0-1062.el7.x86_64 (check with uname -r).
Add the host entries on every node:
vim /etc/hosts
192.168.58.150 master
192.168.58.151 node1
192.168.58.152 node2
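The three /etc/hosts entries above can also be added idempotently with a small sketch. The add_hosts helper name is hypothetical, and the file path is parameterized so the script can be exercised against a copy of the file:

```shell
# Sketch: append the cluster host entries only when they are missing.
# add_hosts is a hypothetical helper; the path defaults to /etc/hosts
# but can be overridden for testing on a scratch file.
add_hosts() {
  hosts_file="${1:-/etc/hosts}"
  for entry in "192.168.58.150 master" "192.168.58.151 node1" "192.168.58.152 node2"; do
    # -x matches the whole line, -F disables regex interpretation
    grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
  done
}
```

Running it a second time leaves the file unchanged, so it is safe to re-run on all nodes.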
Set the hostname on each node accordingly:
hostnamectl set-hostname master   # on 192.168.58.150
hostnamectl set-hostname node1    # on 192.168.58.151
hostnamectl set-hostname node2    # on 192.168.58.152
Generate an SSH key pair on the master and copy it to every node for password-less login:
ssh-keygen
for host in master node{1..2}
do
  echo ">>> ${host}"
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
done
Stop and disable the firewall:
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux (getenforce should report Permissive or Disabled afterwards):
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
getenforce
Load the br_netfilter module, which the bridge and IPv4-forwarding sysctl settings below depend on:
modprobe br_netfilter
Create /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Apply the settings:
sysctl -p /etc/sysctl.d/k8s.conf
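The file creation and reload steps above can be combined into one small function. write_k8s_sysctl is a hypothetical name, and the target path is parameterized so the sketch can run outside /etc:

```shell
# Sketch: write the three kernel parameters required for Kubernetes networking.
# write_k8s_sysctl is illustrative; in the real setup the file is
# /etc/sysctl.d/k8s.conf and is then applied with `sysctl -p`.
write_k8s_sysctl() {
  conf_file="${1:-/etc/sysctl.d/k8s.conf}"
  cat > "$conf_file" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
}
```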
Enable IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules so the required modules are reloaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the kernel modules loaded correctly.
Next, make sure the ipset package is installed on every node:
yum install ipset
To inspect the IPVS proxy rules conveniently, it is also worth installing ipvsadm:
yum install ipvsadm
Synchronize server time across the nodes:
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
Disable the swap partition:
swapoff -a
Edit /etc/fstab and comment out the swap entry so it is not mounted automatically, then confirm with free -m that swap is off. Also adjust swappiness by adding the following line to /etc/sysctl.d/k8s.conf (and re-apply it with sysctl -p /etc/sysctl.d/k8s.conf):
vm.swappiness = 0
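Commenting out the swap line in /etc/fstab can be scripted with sed. In this sketch, disable_swap_mount is a hypothetical helper and the path is parameterized so it can be tested on a copy of the file:

```shell
# Sketch: comment out swap mounts in an fstab-style file so swap stays off
# after reboot. disable_swap_mount is illustrative; the real target is /etc/fstab.
disable_swap_mount() {
  fstab="${1:-/etc/fstab}"
  # On every uncommented line whose mount point or type is "swap",
  # prefix the line with '#'. A .bak backup is kept.
  sed -i.bak '/[[:space:]]swap[[:space:]]/s/^[^#]/#&/' "$fstab"
}
```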
Install Docker:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum install -y docker-ce-19.03.11
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors": ["https://hsg7ghdv.mirror.aliyuncs.com"]
}
systemctl start docker
systemctl enable docker
Configure the kubernetes.repo yum repository.
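The repo file itself is not shown here. Since this guide pulls images from Aliyun mirrors elsewhere, a commonly used Aliyun repo file (written to /etc/yum.repos.d/kubernetes.repo) looks like the following; the baseurl and gpgkey URLs are the public Aliyun mirror paths and are an assumption about the intended source:

```
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```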
Install kubeadm, kubelet and kubectl.
The --disableexcludes=kubernetes flag lifts the exclude= directive configured in the kubernetes repo, so these packages can be installed:
yum install -y kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 --disableexcludes=kubernetes
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
systemctl enable --now kubelet
Initialize the cluster (run on the master):
kubeadm config print init-defaults > kubeadm.yaml
Then edit kubeadm.yaml:
[root@master ~]# vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.150   # this node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # use the Aliyun image mirror
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # flannel expects this subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # kube-proxy mode
kubeadm init --config kubeadm.yaml
If the approach above feels cumbersome, you can use the following one-liner instead (the config-file approach above is still recommended):
kubeadm init --kubernetes-version=1.19.0 \
--apiserver-advertise-address=192.168.58.150 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16
When kubeadm init finishes successfully, the master is up. Then set up kubectl for the current user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 85m v1.19.4
Join the other nodes; after the join succeeds, run kubectl get nodes to confirm:
kubeadm join 192.168.58.150:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:822767b4ca97cd7403902266a9c94ad9a351a668cf1abed521d3b8770e649cac
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 87m v1.19.4
node1 NotReady <none> 25s v1.19.4
node2 NotReady <none> 21s v1.19.4
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-d4srn 0/1 Pending 0 89m
coredns-6d56c8448f-qlrlv 0/1 Pending 0 89m
etcd-master 1/1 Running 0 90m
kube-apiserver-master 1/1 Running 0 90m
kube-controller-manager-master 1/1 Running 0 90m
kube-proxy-5f58g 1/1 Running 0 3m30s
kube-proxy-qz4sm 1/1 Running 0 3m35s
kube-proxy-swrrc 1/1 Running 0 89m
kube-scheduler-master 1/1 Running 0 90m
kubectl describe pod coredns-6d56c8448f-d4srn -n kube-system
The CoreDNS pods stay Pending until a network plugin is installed. Deploy flannel:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Install the dashboard:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml
[root@master ~]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-7b59f7d4df-968cr 1/1 Running 0 20m
kubernetes-dashboard-665f4c5ff-nsx4h 1/1 Running 0 20m
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.109.45.147 <none> 8000/TCP 24m
kubernetes-dashboard NodePort 10.102.141.248 <none> 443:30457/TCP 24m
[root@master ~]# kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h26m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4h26m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.109.45.147 <none> 8000/TCP 35m
kubernetes-dashboard kubernetes-dashboard NodePort 10.102.141.248 <none> 443:30457/TCP 35m
Create a cluster-wide admin user (admin.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
Apply it directly:
kubectl apply -f admin.yaml
[root@master ~]# kubectl get secret -n kubernetes-dashboard|grep admin-token
admin-token-9rpvn kubernetes.io/service-account-token 3 3m35s
[root@master ~]# kubectl describe secret admin-token-9rpvn -n kubernetes-dashboard
The token field in the output is the bearer token used to log in to the dashboard over the NodePort (https://<node-ip>:30457).
Cleaning up the cluster
If you run into problems during the process, you can reset everything with the following commands:
kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /var/lib/cni/
Adding additional master nodes
kubeadm join <control-plane-host>:<port> \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>
Edit the kubeadm config (both the local kubeadm.yaml and the in-cluster ConfigMap):
vim kubeadm.yaml
kubectl edit configmap kubeadm-config -n kube-system
Add the following:
apiServer:
  certSANs:
  - master
  - master2
  - 192.168.58.150
  - 192.168.58.153
controlPlaneEndpoint: 192.168.58.153:6443
Create a join token:
kubeadm token create --print-join-command --config kubeadm.yaml
List tokens:
kubeadm token list
Get the CA certificate hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
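The hash pipeline can be wrapped in a small helper. ca_cert_hash is a hypothetical name; the certificate path is passed in as an argument, which on a kubeadm master would be /etc/kubernetes/pki/ca.crt:

```shell
# Sketch: compute the value kubeadm expects for --discovery-token-ca-cert-hash.
# ca_cert_hash is illustrative; pass the CA certificate path as the argument.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# e.g. ca_cert_hash /etc/kubernetes/pki/ca.crt
```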
Get the --certificate-key value:
kubeadm init phase upload-certs --upload-certs
Join the master2 node (a real HA setup would also need keepalived plus a load-balancing proxy in front of the API servers):
kubeadm join 172.30.112.14:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:13c72c5df001626f62b31a57a6a03cfed32addb290cfd3ed5e48b7d12dd4adc2 \
  --control-plane --certificate-key e65dd0140640a0510f30fe8bb8f49623901b9085f69a006895bd38bdc00dac89
Check the status:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 17h v1.19.3
k8smaster2 Ready master 11h v1.19.3
k8snode1 Ready <none> 17h v1.19.3
k8snode2 Ready <none> 17h v1.19.3