Cloud Native | Kubernetes | A Complete Guide to Deploying and Installing minikube (Revised Edition) (Part 1)


Preface:


To learn a new platform, you first need a working instance of that platform, and deploying Kubernetes undoubtedly raises the bar for getting started: whether you install from binaries or with kubeadm, a fair amount of operations skill is required, and the hardware needed for a practice environment is substantial, with at least three servers required to complete the deployment.

Tools like kind and minikube can quickly stand up a learning environment. They are simple and easy to use, a single server is enough (though that one machine needs more memory; at least 8 GB is recommended), they are highly automated (almost everything is configured for you), and they support multiple virtualization engines, including common ones such as Docker, containerd, and KVM. The downside is that there is essentially no room for customization.

Virtualization engines supported by minikube:

[Figure: the container and VM drivers supported by minikube]

Most of the material in this tutorial was pulled from the official docs, at: Welcome! | minikube

Related installation files (extract conntrack.tar.gz and install the dependency RPMs with rpm -ivh *; minikube-images.tar.gz is the image bundle, extract it and load the images into Docker; place the three executables in /root/.minikube/cache/linux/amd64/v1.18.8/):

Link: https://pan.baidu.com/s/14-r59VfpZRpfiVGj4IadxA?pwd=k8ss

Extraction code: k8ss
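
As a rough sketch, preparing these offline files might look like the following. The archive layout and the binary names (kubeadm, kubelet, kubectl) are assumptions based on minikube's cache path; adjust to what the download actually contains.

# Install the conntrack dependency RPMs (assumed to unpack into ./conntrack)
tar zxf conntrack.tar.gz
cd conntrack && rpm -ivh * && cd ..

# Place the cached Kubernetes binaries where minikube expects them
mkdir -p /root/.minikube/cache/linux/amd64/v1.18.8/
cp kubeadm kubelet kubectl /root/.minikube/cache/linux/amd64/v1.18.8/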

Part 1: Getting started with minikube


Prerequisites


You need at least 2 CPUs, 2 GB of memory, 20 GB of free disk space, internet access, and one virtualization engine from among Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation. Docker is the easiest of these to install, so this guide uses Docker; the operating system is CentOS.

What you’ll need
2 CPUs or more
2GB of free memory
20GB of free disk space
Internet connection
Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation
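
A quick sanity check of these prerequisites on a CentOS host might look like this (standard coreutils commands):

nproc                                # CPU count; should be 2 or more
free -h | awk '/^Mem:/{print $2}'    # total memory; should be at least 2G
df -h / | awk 'NR==2{print $4}'      # free space on /; should be at least 20G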

For the Docker environment, see the offline-installation post docker的离线安装以及本地化配置_zsk_john的博客-CSDN博客 and follow it; make sure Docker is installed before continuing.

The Docker version must be between 18.09 and 20.10.
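
You can confirm the installed version with the standard Docker CLI:

docker version --format '{{.Server.Version}}'    # should fall within 18.09-20.10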



Part 2: Installation


Download the minikube binary


curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
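
A quick check that the binary landed on the PATH:

minikube version    # prints the installed minikube version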

Part 3: Importing the images


When minikube installs Kubernetes, it pulls images from registries outside China that are blocked by the firewall and cannot be reached, so this offline image bundle was prepared.

[root@slave3 ~]# tar zxf minikube-images.tar.gz 
[root@slave3 ~]# cd minikube-images
[root@slave3 minikube-images]# for i in `ls ./*`;do docker load <$i;done
dfccba63d0cc: Loading layer [==================================================>]  80.82MB/80.82MB
Loaded image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
c965b38a6629: Loading layer [==================================================>]  43.58MB/43.58MB
...... (remaining load output omitted)
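
Optionally, confirm the images were loaded; the exact list depends on the bundle:

docker images | grep -E 'k8s|kube'    # Kubernetes-related images just loaded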

Part 4: The command to initialize the Kubernetes cluster


A quick overview of the flags: --image-repository pulls images from Alibaba Cloud's mirror; --cni selects flannel as the network plugin (delete that line if you don't want flannel); the pod-network-cidr settings hand flannel's default 10.244.0.0/16 subnet to kubeadm and the kubelet. Nothing else needs special attention.

minikube config set driver none
minikube start --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16 \
    --extra-config=kubelet.pod-cidr=10.244.0.0/16 \
    --network-plugin=cni \
    --image-repository='registry.aliyuncs.com/google_containers' \
    --cni=flannel \
    --apiserver-ips=192.168.217.23 \
    --kubernetes-version=1.18.8 \
    --vm-driver=none

The log from starting the cluster (this run used a simplified command):


[root@slave3 conntrack]# minikube start --driver=none --kubernetes-version=1.18.8
* minikube v1.26.1 on Centos 7.4.1708
* Using the none driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Running on localhost (CPUs=4, Memory=7983MB, Disk=51175MB) ...
* OS release is CentOS Linux 7 (Core)
E0911 11:23:25.121495   14039 docker.go:148] "Failed to enable" err=<
  sudo systemctl enable docker.socket: exit status 1
  stdout:
  stderr:
  Failed to execute operation: No such file or directory
 > service="docker.socket"
! This bare metal machine is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
    > kubectl.sha256:  65 B / 65 B [-------------------------] 100.00% ? p/s 0s
    > kubelet:  108.05 MiB / 108.05 MiB [--------] 100.00% 639.49 KiB p/s 2m53s                                                                                                                             
  - Generating certificates and keys ...
  - Booting up control plane ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.8:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [slave3 localhost] and IPs [192.168.217.136 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [slave3 localhost] and IPs [192.168.217.136 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
  Unfortunately, an error has occurred:
    timed out waiting for the condition
  This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
  If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'
  Additionally, a control plane component may have crashed or exited when started by the container runtime.
  To troubleshoot, list all containers using your preferred container runtimes CLI.
  Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
stderr:
W0911 11:26:38.783101   14450 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [WARNING Swap]: running with swap on is not supported. Please disable swap
  [WARNING FileExisting-socat]: socat not found in system path
  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0911 11:26:48.464749   14450 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0911 11:26:48.466754   14450 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring local host environment ...
* 
! The 'none' driver is designed for experts who need to integrate with an existing VM
* Most users should use the newer 'docker' driver instead, which does not require root!
* For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
* 
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
* 
  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube
* 
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
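
Once start reports Done!, a couple of standard kubectl commands will confirm the cluster is healthy (with the flannel CNI selected earlier, its pods should appear too):

kubectl get nodes -o wide    # the single node should report Ready
kubectl get pods -A          # control-plane (and flannel) pods should be Running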

Stopping and deleting minikube:

Stopping the cluster is very simple:

minikube stop
The output is as follows:
* Stopping "minikube" in none ...
* Node "minikube" stopped.

If the server is rebooted, just run the command again with start to bring minikube back up. Deleting minikube is just as simple: swap the parameter for delete. Note that delete removes the configuration files and so on, but only those that minikube created itself; anything else is left alone.
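
For reference, the corresponding commands are simply:

minikube start     # bring the cluster back up after a reboot
minikube delete    # tear down the cluster and remove minikube-created files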

The output of start:

[root@node3 manifests]# minikube start
* minikube v1.12.0 on Centos 7.4.1708
* Using the none driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...
* OS release is CentOS Linux 7 (Core)
* Preparing Kubernetes v1.18.8 on Docker 19.03.9 ...
* Configuring local host environment ...
* 
! The 'none' driver is designed for experts who need to integrate with an existing VM
* Most users should use the newer 'docker' driver instead, which does not require root!
* For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
* 
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
* 
  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube
* 
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"

The output above shows that the single-node Kubernetes cluster has been installed successfully, but there are some warnings to deal with, namely the swap, cgroup-driver, socat, and kubelet-service warnings visible in the earlier log.
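
The standard fixes for those warnings on CentOS are sketched below; these are generic kubeadm-era remedies rather than steps taken from this article, so adapt as needed (in particular, merge the daemon.json snippet with any existing Docker configuration instead of overwriting it):

# Disable swap, which kubeadm does not support
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Enable the kubelet service so it survives reboots
systemctl enable kubelet

# Install socat, which the kubeadm preflight check looks for
yum install -y socat

# Switch Docker's cgroup driver to systemd, as kubeadm recommends
# (careful: this overwrites /etc/docker/daemon.json; merge by hand if it exists)
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker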
