Offline Deployment of KubeSphere v4.1.2 + Kubernetes v1.30.6: A Production-Grade Guide

Summary: This article walks through deploying a highly available KubeSphere v4.1.2 + Kubernetes v1.30.6 cluster with KubeKey v3.1.7 in a fully air-gapped environment. It covers the key steps of node planning, building the offline artifact, setting up a private image registry, cluster configuration, and verification, and provides reusable production-grade YAML templates plus a troubleshooting guide for one-shot, standardized delivery into isolated enterprise networks.


Background and Benefits

In most enterprise production environments, cluster nodes cannot reach the public internet, so offline delivery is the norm. Following KubeKey v3.1.7 best practices, this guide demonstrates step by step how to deliver a highly available Kubernetes v1.30.6 cluster (three control-plane nodes, three etcd members, multiple workers) on fully air-gapped Ubuntu 22.04.5 LTS bare-metal servers, and then deploy the full KubeSphere v4.1.2 container platform on top, enabling enterprise capabilities such as DevOps, monitoring, logging, alerting, and the extension marketplace.

What You Will Learn

  • Understand the end-to-end process of building an offline artifact from scratch;
  • Get production-ready config-sample.yaml and manifest-sample.yaml templates you can drop in;
  • Learn how to reuse control-plane nodes as workers to save cost;
  • See how to plug in a self-hosted Harbor or Docker Registry as the offline image registry;
  • Learn how to verify the whole delivery and troubleshoot it.

1. Scenario and Goals

  • Fully offline: no internet access; all images, OS packages, and the ISO dependency are packaged once up front;
  • Highly available: 3 control-plane + 3 etcd + 4 worker nodes + a dedicated registry node;
  • Production grade:
    • Kubernetes v1.30.6
    • KubeSphere v4.1.2 (including the DevOps and monitoring stacks)
    • Calico CNI, OpenEBS LocalPV, built-in HAProxy load balancing
    • containerd as the container runtime
    • Private registry dockerhub.kubekey.local with a self-signed certificate

2. Node Plan & Network

Hostname      IP            Roles
ksp-control-1 192.168.11.1 control-plane, etcd, worker
ksp-control-2 192.168.11.2 control-plane, etcd, worker
ksp-control-3 192.168.11.3 control-plane, etcd, worker
ksp-registry 192.168.11.4 registry
ksp-worker-1 192.168.11.5 worker

Network CIDRs

  • Pod CIDR: 10.233.64.0/18
  • Service CIDR: 10.233.0.0/18
  • Load-balancer VIP: lb.kubesphere.local:6443 (served by the built-in HAProxy)
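As a quick sanity check on the address plan above: a /18 prefix leaves 14 host bits, so both the Pod and Service ranges each provide roughly 16k addresses, which is ample for a cluster of this size. The arithmetic, as an illustrative shell snippet:

```shell
# /18 prefix => 32 - 18 = 14 host bits in the range
pod_bits=$((32 - 18))
pod_ips=$((1 << pod_bits))
echo "$pod_ips"   # 16384 addresses in 10.233.64.0/18
```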

3. Offline Dependency Preparation

3.1 OS dependency ISO

wget https://github.com/kubesphere/kubekey/releases/download/v3.1.3/ubuntu-22.04-debs-amd64.iso

Target path: /data/install_k8s/kubekey/ubuntu-22.04-debs-amd64.iso

3.2 Install KubeKey

curl -sfL https://get-kk.kubesphere.io | KKZONE=cn sh -
chmod +x kk
./kk version -o json

3.3 Generate the Artifact Manifest

export KKZONE=cn
./kk create manifest --with-kubernetes v1.30.6 --with-registry "docker registry"

This generates manifest-sample.yaml (full content in Section 7).

3.4 Export the offline artifact

./kk artifact export -m manifest-sample.yaml -o kubesphere-v412-v1306-artifact.tar.gz --debug
ls -lh kubesphere-v412-v1306-artifact.tar.gz
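Before carrying the artifact into the isolated network, it is worth recording a checksum so a corrupted copy is caught early. Against the real file this would be `sha256sum kubesphere-v412-v1306-artifact.tar.gz > artifact.sha256` on the build side and `sha256sum -c artifact.sha256` on the target side; the round-trip pattern, illustrated here on a throwaway file:

```shell
# Checksum round-trip demonstrated on a temp file (not the real artifact)
tmp=$(mktemp)
echo demo > "$tmp"
sha256sum "$tmp" > "$tmp.sha256"        # record on the build side
result=$(sha256sum -c "$tmp.sha256")    # verify on the target side
rm -f "$tmp" "$tmp.sha256"
echo "$result"                          # "<path>: OK"
```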

4. Cluster Configuration File (config-sample.yaml)

All production recommendations are already baked in, so the file can be used as-is.
The full content follows (identical to the file actually used in this deployment):

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ksp-control-1, address: 192.168.11.1, internalAddress: 192.168.11.1, user: root, password: "test2025:"}
  - {name: ksp-control-2, address: 192.168.11.2, internalAddress: 192.168.11.2, user: root, password: "test2025:"}
  - {name: ksp-control-3, address: 192.168.11.3, internalAddress: 192.168.11.3, user: root, password: "test2025:"}
  - {name: ksp-registry, address: 192.168.11.4, internalAddress: 192.168.11.4, user: root, password: "test2025:"}
  - {name: ksp-worker-1, address: 192.168.11.5, internalAddress: 192.168.11.5, user: root, password: "test2025:"}
  roleGroups:
    etcd:
    - ksp-control-1
    - ksp-control-2
    - ksp-control-3
    control-plane: 
    - ksp-control-1
    - ksp-control-2
    - ksp-control-3
    worker:
    - ksp-control-1
    - ksp-control-2
    - ksp-control-3
    - ksp-worker-1
    registry:
    - ksp-registry
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  system:
    rpms:
      - tar
  kubernetes:
    version: v1.30.6
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /data/openebs/local
  registry:
    auths:
      "dockerhub.kubekey.local":
        skipTLSVerify: true
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
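
If you run a self-hosted Harbor instead of the bundled Docker Registry, the registry block changes roughly as follows. This is a sketch only: type: harbor follows KubeKey's config schema, but the username and password shown are Harbor's out-of-box defaults and must be replaced with your own project credentials:

```yaml
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin          # hypothetical: replace with your Harbor user
        password: Harbor12345    # hypothetical: Harbor's default, change it
        skipTLSVerify: true
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
```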

5. Deployment Steps (run on the registry node)

5.1 Stage the resources

/srv/kubekey/
├── kk
├── kubesphere-v412-v1306-artifact.tar.gz
└── ks-core-1.1.3.tgz

5.2 Initialize the offline registry

./kk init registry -f config-sample.yaml -a kubesphere-v412-v1306-artifact.tar.gz --debug

Verify the certificates:

ls /etc/docker/certs.d/dockerhub.kubekey.local/
# ca.crt  dockerhub.kubekey.local.cert  dockerhub.kubekey.local.key

ls /etc/ssl/registry/ssl/
# ca.crt  ca.pem  ca-key.pem  dockerhub.kubekey.local.cert  dockerhub.kubekey.local.key  dockerhub.kubekey.local.pem  dockerhub.kubekey.local-key.pem
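Beyond listing the files, you can confirm the certificate actually covers the registry domain by reading its subjectAltName; on the registry node that would be `openssl x509 -in /etc/ssl/registry/ssl/dockerhub.kubekey.local.pem -noout -ext subjectAltName`. The same inspection, demonstrated on a disposable self-signed cert (requires OpenSSL 1.1.1+):

```shell
# Generate a throwaway self-signed cert for the registry domain, then read
# back its SAN -- the exact check you would run against the real cert files.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/reg.key" -out "$tmp/reg.crt" \
  -subj "/CN=dockerhub.kubekey.local" \
  -addext "subjectAltName=DNS:dockerhub.kubekey.local" 2>/dev/null
san=$(openssl x509 -in "$tmp/reg.crt" -noout -ext subjectAltName)
rm -rf "$tmp"
echo "$san"
```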

5.3 Push the images

./kk artifact image push -f config-sample.yaml -a kubesphere-v412-v1306-artifact.tar.gz

Verify the images:

docker pull dockerhub.kubekey.local/kubesphereio/pause:3.9
# or
ls /mnt/registry/docker/registry/v2/repositories/kubesphereio/
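During the push, kk rewrites every image reference: the source registry host is replaced by privateRegistry and the namespace by namespaceOverride from config-sample.yaml, which is why a manifest image such as registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.9 becomes pullable as dockerhub.kubekey.local/kubesphereio/pause:3.9. A sketch of that mapping (the real logic lives inside kk; this only illustrates the naming scheme):

```shell
src="registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.9"
private_registry="dockerhub.kubekey.local"   # registry.privateRegistry
namespace_override="kubesphereio"            # registry.namespaceOverride
repo_tag="${src#*/*/}"                       # strip "<host>/<namespace>/" -> "pause:3.9"
dst="${private_registry}/${namespace_override}/${repo_tag}"
echo "$dst"   # dockerhub.kubekey.local/kubesphereio/pause:3.9
```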

5.4 Create the cluster

./kk create cluster \
  -f config-sample.yaml \
  -a kubesphere-v412-v1306-artifact.tar.gz \
  --with-packages \
  --skip-push-images \
  --with-local-storage \
  --debug

6. Cluster Verification (sample output)

6.1 Nodes & Pods

$ kubectl get nodes -o wide
NAME            STATUS   ROLES                         AGE   VERSION   INTERNAL-IP     OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
ksp-control-1   Ready    control-plane,etcd,worker     15m   v1.30.6   192.168.11.1    Ubuntu 22.04.5 LTS   5.15.0-113-generic   containerd://1.7.13
ksp-control-2   Ready    control-plane,etcd,worker     15m   v1.30.6   192.168.11.2    Ubuntu 22.04.5 LTS   5.15.0-113-generic   containerd://1.7.13
ksp-control-3   Ready    control-plane,etcd,worker     15m   v1.30.6   192.168.11.3    Ubuntu 22.04.5 LTS   5.15.0-113-generic   containerd://1.7.13
ksp-registry    Ready    registry                      15m   v1.30.6   192.168.11.4    Ubuntu 22.04.5 LTS   5.15.0-113-generic   containerd://1.7.13
ksp-worker-1    Ready    worker                        14m   v1.30.6   192.168.11.5    Ubuntu 22.04.5 LTS   5.15.0-113-generic   containerd://1.7.13
$ kubectl get pods -A -o wide | head -20
NAMESPACE            NAME                                            READY   STATUS    RESTARTS   AGE   IP            NODE
kube-system          calico-kube-controllers-7f98b9f7b9-6l6b4       1/1     Running   0          15m   10.233.65.2   ksp-control-1
kube-system          calico-node-2f6zv                              1/1     Running   0          14m   192.168.11.5  ksp-worker-1
kube-system          calico-node-4x6lc                              1/1     Running   0          15m   192.168.11.3  ksp-control-3
kube-system          calico-node-5t8nk                              1/1     Running   0          15m   192.168.11.1  ksp-control-1
kube-system          calico-node-d9kzf                              1/1     Running   0          15m   192.168.11.2  ksp-control-2
kube-system          coredns-7f8cbc69b8-4t7ks                       1/1     Running   0          15m   10.233.65.3   ksp-control-1
kube-system          haproxy-7f8b4c78b-4t7ks                         1/1     Running   0          15m   192.168.11.1  ksp-control-1
kube-system          kube-apiserver-ksp-control-1                    1/1     Running   0          16m   192.168.11.1  ksp-control-1
kube-system          ... ...

7. Artifact Manifest (manifest-sample.yaml)

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems: 
  - arch: amd64
    type: linux
    id: ubuntu
    version: "22.04"
    osImage: Ubuntu 22.04.5 LTS
    repository:
      iso:
        localPath: "/data/install_k8s/kubekey/ubuntu-22.04-debs-amd64.iso"
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.30.6
  components:
    helm: 
      version: v3.14.3
    cni: 
      version: v1.2.0
    etcd: 
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl: 
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.9
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.30.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.30.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.30.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.30.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
  ## ks-core
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v4.1.2
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/ks-console:v4.1.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v4.1.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.27.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:7.2.4-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-extensions-museum:v1.1.2
  ## devops
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/devops-apiserver:v4.1.2
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/devops-controller:v4.1.2
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/devops-tools:v4.1.2
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/devops-jenkins:v4.1.2-2.346.3
  - swr.cn-southwest-2.myhuaweicloud.com/ks/jenkins/inbound-agent:4.10-2
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-base:v3.2.2
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-nodejs:v3.2.0
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-maven:v3.2.0
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-maven:v3.2.1-jdk11
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-python:v3.2.0
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-go:v3.2.0
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-go:v3.2.2-1.16
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-go:v3.2.2-1.17
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-go:v3.2.2-1.18
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-base:v3.2.2-podman
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-nodejs:v3.2.0-podman
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-maven:v3.2.0-podman
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-maven:v3.2.1-jdk11-podman
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-python:v3.2.0-podman
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-go:v3.2.0-podman
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-go:v3.2.2-1.16-podman
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-go:v3.2.2-1.17-podman
  - swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubesphere/builder-go:v3.2.2-1.18-podman
  - swr.cn-southwest-2.myhuaweicloud.com/ks/argoproj/argocd:v2.3.3
  - swr.cn-southwest-2.myhuaweicloud.com/ks/argoproj/argocd-applicationset:v0.4.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/dexidp/dex:v2.30.2
  - swr.cn-southwest-2.myhuaweicloud.com/ks/library/redis:6.2.6-alpine
  ## whizard-monitoring
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kubectl:v1.27.12
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kube-state-metrics:v2.12.0
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubespheredev/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
  - swr.cn-southwest-2.myhuaweicloud.com/ks/thanosio/thanos:v0.36.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/brancz/kube-rbac-proxy:v0.18.0
  - swr.cn-southwest-2.myhuaweicloud.com/ks/prometheus-operator/prometheus-config-reloader:v0.75.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/prometheus-operator/prometheus-operator:v0.75.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/prometheus/node-exporter:v1.8.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/prometheus/prometheus:v2.51.2
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/dcgm-exporter:3.3.5-3.4.0-ubuntu22.04
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/process-exporter:0.5.0
  - swr.cn-southwest-2.myhuaweicloud.com/ks/nginxinc/nginx-unprivileged:1.24
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/calico-exporter:v0.3.0
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/whizard-monitoring-helm-init:v0.1.0
  ## whizard-telemetry
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/whizard-telemetry-apiserver:v1.2.2
  registry:
    auths: {}

8. Deploy KubeSphere Core

Copy ks-core-1.1.3.tgz to any control-plane node, then run:

helm upgrade --install -n kubesphere-system --create-namespace ks-core ks-core-1.1.3.tgz \
  --set global.imageRegistry=dockerhub.kubekey.local/ks \
  --set extension.imageRegistry=dockerhub.kubekey.local/ks \
  --set ksExtensionRepository.image.tag=v1.1.2 \
  --set ha.enabled=true \
  --set redisHA.enabled=true \
  --set hostClusterName=opsxlabs-main \
  --debug --wait

Verify:

kubectl get pods -n kubesphere-system

9. Access KubeSphere

  • URL: http://<any control-plane node IP>:30880
  • Default account: admin / P@88w0rd (you are forced to set a new password on first login)

10. Troubleshooting Quick Reference

  • Node NotReady / image pulls fail: check that /etc/hosts on every node contains 192.168.11.4 dockerhub.kubekey.local.
  • Registry certificate not trusted: copy /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt to /usr/local/share/ca-certificates/ (with a .crt extension) on every node and run update-ca-certificates.
  • Pods cannot get an IP: confirm the Calico Pods are Running and the physical network MTU is at least 1450.
  • KubeSphere extension repository fails to load: make sure ksExtensionRepository.image.tag matches the version shipped in the offline bundle.
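The first troubleshooting entry above comes down to name resolution for the private registry: every node, including any worker added later, needs the registry entry in its hosts file, for example:

```text
# /etc/hosts on every cluster node
192.168.11.4  dockerhub.kubekey.local
```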

11. One-Shot Cleanup (if you need to reinstall)

./kk delete cluster -f config-sample.yaml
docker system prune -af
rm -rf /data/openebs /mnt/registry

12. Wrap-Up

That completes an offline, highly available, production-grade KubeSphere v4.1.2 + Kubernetes v1.30.6 deployment. All configuration files and verification output are included above, so the delivery can be reproduced in any air-gapped network. Happy deploying!
