This article walks through building a DevOps platform from scratch. It covers setting up a self-hosted GitLab, Maven repository, private Docker registry, Kubernetes, and KubeSphere; hands-on practice; graphically configured pipelines in KubeSphere for deploying applications; distributed tracing integration; and unified collection plus one-stop search of application container logs. Let's start with the basic tools and demystify DevOps step by step.
The figure above shows the core DevOps design of KubeSphere.
Basic tool setup
Setting up Nexus
To keep things simple, we run the Nexus repository manager in Docker. It acts as the private Docker registry, and also serves as a Maven and Node.js repository, which speeds up later pipeline builds.
# Pull the nexus3 image
docker pull sonatype/nexus3
# Create a shell script in a directory of your choice; /opt is used here
touch start-nexus.sh
# Edit the script
vim start-nexus.sh
# Script contents: -v maps the Nexus data directory onto the host (the data-disk mount follows the earlier step); the port mappings are explained below
sudo chown -R 200 /data/nexus-data && sudo docker run -d -p 8081:8081 -p 8082:8082 -p 5000:5000 -p 9000:9000 --name nexus -v /data/nexus-data:/nexus-data --restart=always sonatype/nexus3
# Make the script executable
chmod +x start-nexus.sh
Tip: the data directory must be chowned to UID 200 before starting Nexus in Docker, otherwise the container fails to start.
Check that the container is up:
[root@xz3qatanqfeh5z opt]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
43637e78aa62 sonatype/nexus3 "/opt/sonatype/nexus…" 21 hours ago Up 21 hours 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp, 0.0.0.0:8081-8082->8081-8082/tcp, :::8081-8082->8081-8082/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp nexus
If the container is healthy, log in to the Nexus web UI at http://ip:8081/
The initial admin password is stored inside the container in the file /nexus-data/admin.password (for example, docker exec nexus cat /nexus-data/admin.password); change the password after the first login.
Configure a Docker user and role
Pay attention to the role: the hosted* privileges must be granted, otherwise docker login fails with 401 Unauthorized.
Configure a hosted repository, used for pushing custom images
Setting up the Maven repository is even simpler; just follow the same pattern as the Docker configuration.
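As a sketch of the client side, a Maven settings.xml that routes downloads through the Nexus group repository might look like the following; the host, port, and maven-public repository name are assumptions based on Nexus defaults, so adjust them to your instance:

```xml
<settings>
  <mirrors>
    <!-- Send all dependency requests through the Nexus group repository -->
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://10.201.0.38:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
  <servers>
    <!-- Credentials used when deploying artifacts to hosted repositories -->
    <server>
      <id>nexus</id>
      <username>admin</username>
      <password>your-password</password>
    </server>
  </servers>
</settings>
```

A file like this is what the pipeline below fetches with curl before running Maven.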
Kibana and Elasticsearch setup
Installing Kibana
# Pull the image
docker pull kibana:7.8.0
# Create the startup script
touch start-kibana.sh
docker run -d --restart=always --name kibana -p 5601:5601 -v /opt/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.8.0
# Edit kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://10.201.0.62:9200","http://10.201.0.63:9200","http://10.201.0.64:9200","http://10.201.0.65:9200","http://10.201.0.66:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
# Run the Kibana startup script
sh start-kibana.sh
# Check the container
[root@xz3qatanqfeh5z config]# docker ps |grep 'kibana'
67ce3bfcefa0 kibana:7.8.0 "/usr/local/bin/dumb…" 5 days ago Up 5 days 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp kibana
Installing Elasticsearch
# ① Download the package
Download Elasticsearch from https://www.elastic.co/cn/downloads/past-releases/elasticsearch-7-8-0 and choose the LINUX X86_64 build.
# ② Install base tools
# Install the JDK
yum list java-1.8*
yum install -y java-1.8.0-openjdk.x86_64
# Install telnet
yum -y install telnet-server
yum -y install telnet
# Add a regular user; Elasticsearch refuses to run as root
useradd es
chown -R es:es /usr/local/es-cluster
# ③ System configuration
# Append to /etc/security/limits.conf (* applies to all Linux users; you may scope it to the es user instead)
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
# Append to /etc/security/limits.d/20-nproc.conf
es soft nofile 65536
es hard nofile 65536
* hard nproc 4096
# Append to /etc/sysctl.conf
fs.file-max = 655360
vm.max_map_count = 262144
Then run sysctl -p to apply the changes.
# Start each node separately
su es
# Run from each node's Elasticsearch install directory
bin/elasticsearch -d
Use a browser plugin to connect to and verify the ES cluster
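Besides a plugin, cluster health can be checked with plain curl against any node (addresses as configured for Kibana above); a healthy cluster reports status green:

```shell
# Query cluster health on any data node
curl 'http://10.201.0.62:9200/_cluster/health?pretty'
```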
Collecting container logs with Kubernetes' own components
Enable the logging feature in KubeSphere to collect logs from every container, and point the external Elasticsearch setting at the cluster built above. Log search then looks like this:
Setting up and using KubeSphere and Kubernetes
For Kubernetes and KubeSphere it is advisable to set up a single-node installation first, then add nodes to form a complete cluster once that works. This is especially useful when access to the public network is slow.
Mounting the data disk
List all data disks
# List all disk partitions
fdisk -l
Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00094928
Device Boot Start End Blocks Id System
/dev/vda1 * 2048 83886046 41941999+ 83 Linux
Disk /dev/vdb: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 0 MB, 374784 bytes, 732 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Formatting the partition
# Format the disk
[root@xzwkmgt7996bzf ~]# mkfs /dev/vdb
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
32768000 inodes, 131072000 blocks
6553600 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
Mounting the partition
# Create the mount point
mkdir /data
# Mount the data disk on /data
mount /dev/vdb /data
# Verify the mount
[root@xzwkmgt7996bzf ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 33M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/vda1 40G 1.9G 36G 5% /
tmpfs 1.6G 0 1.6G 0% /run/user/0
/dev/vdb 493G 70M 468G 1% /data
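A mount done this way does not survive a reboot. To make it persistent, add an entry to /etc/fstab; the ext4 type below is an assumption (a bare mkfs defaults to ext2), so match whatever filesystem you actually created:

```
/dev/vdb  /data  ext4  defaults  0 0
```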
Changing the Docker and Kubernetes data directories
# Create the directories
mkdir /data/docker-data && mkdir /data/k8s-data
# Change the Docker data directory
vim /etc/docker/daemon.json
# Add the following to set the Docker root directory
{
  "graph": "/data/docker-data"
}
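Note that the graph key is deprecated in newer Docker releases (and removed in recent ones) in favor of data-root; on a current Docker Engine the equivalent configuration is:

```json
{
  "data-root": "/data/docker-data"
}
```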
# Restart the Docker service
sudo systemctl daemon-reload && systemctl restart docker
# Verify: run docker info and check that Docker Root Dir now shows the new path
[root@xz3qatanqfeh5z opt] docker info
……
Docker Root Dir: /data/docker-data
……
# Change the Kubernetes data directory
[root@k8s-node1 /] vim /etc/sysconfig/kubelet
# Create the file above if it does not exist, then set the custom data directory
KUBELET_EXTRA_ARGS="--root-dir=/data/k8s-data"
# Restart kubelet
systemctl restart kubelet
# Check the kubelet data directory
[root@master1 ~]# ps -aux|grep kubelet|grep root-dir
root 58240 7.0 0.7 2860232 116820 ? Ssl Oct14 390:35 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=kubesphere/pause:3.5 --root-dir=/data/k8s-data
After changing the Kubernetes data directory, Kubernetes re-pulls images and reschedules every Service and Deployment, which takes quite a while; be patient.
Configuring the private Docker registry
After the registry above is created, update the Docker configuration so that images can be pushed to the self-hosted registry and the DevOps pipeline can pull them.
# Edit the Docker config file to add the insecure-registry addresses
vim /etc/docker/daemon.json
# Add the insecure-registries entry alongside the root-directory setting
{
  "graph": "/data/docker-data",
  "insecure-registries": ["10.201.0.4:8879","10.201.0.38:8082","10.201.0.38:5000","10.201.0.38:9000"]
}
# Restart the Docker service after the change
sudo systemctl daemon-reload && systemctl restart docker
The ports serve different purposes: 8082 (docker-hosted) accepts pushes of custom images, while 9000 (docker-group) serves images from both docker-proxy and docker-hosted.
Creating a Kubernetes Secret
kubectl create secret docker-registry nexus-hub-secret --docker-server=10.201.0.38:9000 --docker-username=docker --docker-password=${your password}
# List secrets
[root@master1 data]# kubectl get secret
NAME TYPE DATA AGE
default-token-7r6xt kubernetes.io/service-account-token 3 28d
nexus-hub-secret kubernetes.io/dockerconfigjson 1 2d11h
This step is important: the pipeline, as well as any custom Service or Deployment, needs this secret to pull images from the private registry.
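For reference, a workload consumes the secret through imagePullSecrets in its pod template; a minimal sketch (the image name is a placeholder):

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: nexus-hub-secret     # the secret created above
      containers:
        - name: app
          image: 10.201.0.38:9000/library/helloworld:latest   # placeholder image
```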
Configuring a graphical pipeline
After the basic setup, the focus here is on configuring the pipeline. Create a suitable user, assign the matching role, then create a DevOps project.
Writing the Deployment template
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "$BUILD_NUMBER"
  generation: $BUILD_NUMBER
  labels:
    api.kubernetes.io/applicationName: $APP_NAME
    api.kubernetes.io/compileType: maven
  name: $APP_NAME
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: $POD_NUMBER
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      api.kubernetes.io/applicationName: $APP_NAME
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        api.kubernetes.io/applicationName: $APP_NAME
    spec:
      containers:
        - env:
            - name: TZ
              value: Asia/Shanghai
          image: "$REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BUILD_NUMBER"
          imagePullPolicy: Always
          name: $APP_NAME
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: "4"
              memory: 4Gi
            requests:
              cpu: "4"
              memory: 4Gi
          volumeMounts:
            - name: time-config
              mountPath: /etc/localtime
              readOnly: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      volumes:
        - name: time-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
Writing the Service template - NodePort type
apiVersion: v1
kind: Service
metadata:
  labels:
    api.kubernetes.io/applicationName: $APP_NAME
    api.kubernetes.io/compileType: maven
  name: "${APP_NAME}-service"
  namespace: default
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    api.kubernetes.io/applicationName: $APP_NAME
  sessionAffinity: None
  type: NodePort
Writing the Service template - LoadBalancer type
apiVersion: v1
kind: Service
metadata:
  labels:
    api.kubernetes.io/applicationName: $APP_NAME
    api.kubernetes.io/compileType: maven
  name: "${APP_NAME}-lb-service"
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    api.kubernetes.io/applicationName: $APP_NAME
  sessionAffinity: None
  type: LoadBalancer
Tip: Kubernetes itself ships no LoadBalancer implementation for bare metal; LoadBalancer mode requires installing MetalLB first (see the MetalLB documentation).
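As a sketch, with a recent CRD-based MetalLB (v0.13+) a Layer 2 setup needs an address pool and an advertisement; the address range below is an assumption and must be a set of free addresses on your subnet:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.201.0.200-10.201.0.220   # assumed free range; adjust to your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```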
Pipeline template
You can import this template for quick editing, or configure each stage in the graphical UI.
pipeline {
    agent {
        node {
            label 'maven'
        }
    }
    stages {
        stage('Clone repository') {
            steps {
                git(url: 'http://11.2.16.2:82/testgroup/helloworld.git', credentialsId: 'gitcert', branch: 'master', changelog: true, poll: false)
            }
        }
        stage('Build And Push') {
            agent none
            steps {
                container('maven') {
                    sh 'curl -o `pwd`/settings.xml http://11.2.16.2:88/settings.xml && mkdir `pwd`/deploy && curl -o `pwd`/deploy/deployment.yaml http://11.2.16.2:88/deploy/deployment.yaml && curl -o `pwd`/deploy/service-loadbalancer.yaml http://11.2.16.2:88/deploy/service-loadbalancer.yaml'
                    sh 'mvn -Dmaven.test.skip=true -gs `pwd`/settings.xml clean package -U'
                    sh 'docker build -t $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BUILD_NUMBER .'
                    withCredentials([usernamePassword(credentialsId: 'nexus-docker-hub', passwordVariable: 'DOCKER_PASSWORD', usernameVariable: 'DOCKER_USERNAME')]) {
                        sh 'echo "$DOCKER_PASSWORD" | docker login $REGISTRY -u "$DOCKER_USERNAME" --password-stdin'
                        sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BUILD_NUMBER'
                    }
                }
            }
        }
        stage('Artifacts') {
            agent none
            steps {
                archiveArtifacts 'target/*.jar'
            }
        }
        stage('Deploy') {
            agent none
            steps {
                container('maven') {
                    withCredentials([kubeconfigContent(credentialsId: 'kubeconfig', variable: 'KUBECONFIG_CONTENT')]) {
                        sh '''mkdir -p ~/.kube
echo "$KUBECONFIG_CONTENT" > ~/.kube/config
envsubst < `pwd`/deploy/deployment.yaml | kubectl apply -f -
envsubst < `pwd`/deploy/service-loadbalancer.yaml | kubectl apply -f -'''
                    }
                }
            }
        }
    }
}
Graphical UI example:
Checking the build results
Images in the private Docker registry
Viewing service information inside Kubernetes
Integrating SkyWalking to collect call traces
Preparation
# Push the base images to the private registry
[root@master1 ~]# docker images | grep skywalking
10.201.0.38:9000/skywalking8.7-jdk8 1.0.0 93bd2ac61b89 4 days ago 852MB
10.201.0.38:8082/apache/skywalking-oap-server 8.7.0-es7 f99a0616ac7a 14 months ago 530MB
apache/skywalking-oap-server 8.7.0-es7 f99a0616ac7a 14 months ago 530MB
docker tag f99a0616ac7a 10.201.0.38:8082/apache/skywalking-oap-server:8.7.0-es7
# Log in to the private registry before pushing
docker push 10.201.0.38:8082/apache/skywalking-oap-server:8.7.0-es7
Configuring the services
# SkyWalking OAP server manifest
apiVersion: v1
kind: Service
metadata:
  labels:
    app: skywalking-oap-server-service
  name: skywalking-oap-server
  namespace: skywalking
spec:
  ports:
    - name: grpc
      port: 11800
      protocol: TCP
      targetPort: 11800
    - name: tcp
      port: 12800
      protocol: TCP
      targetPort: 12800
  selector:
    app: skywalking-oap-server
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: skywalking-oap-server
  name: skywalking-oap-server
  namespace: skywalking
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking-oap-server
  template:
    metadata:
      labels:
        app: skywalking-oap-server
    spec:
      imagePullSecrets:
        - name: nexus-hub-secret
      containers:
        - env:
            - name: SW_STORAGE
              value: elasticsearch7
            - name: SW_STORAGE_ES_CLUSTER_NODES
              value: 10.201.0.62:9200
            - name: TZ
              value: GMT+8
          # image: apache/skywalking-oap-server:8.1.0-es6
          image: 10.201.0.38:9000/apache/skywalking-oap-server:8.7.0-es7
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 2
            successThreshold: 1
            tcpSocket:
              port: 12800
            timeoutSeconds: 2
          name: skywalking-oap-server
          ports:
            - containerPort: 11800
              name: grpc
              protocol: TCP
            - containerPort: 12800
              name: tcp
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 2
            successThreshold: 2
            tcpSocket:
              port: 12800
            timeoutSeconds: 2
          resources:
            limits:
              cpu: "4"
              memory: 6Gi
            requests:
              cpu: "4"
              memory: 4Gi
# SkyWalking UI manifest
apiVersion: v1
kind: Service
metadata:
  labels:
    app: skywalking-ui-service
  name: skywalking-ui
  namespace: skywalking
spec:
  ports:
    - name: tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: skywalking-ui
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: skywalking-ui
  name: skywalking-ui
  namespace: skywalking
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking-ui
  template:
    metadata:
      labels:
        app: skywalking-ui
    spec:
      containers:
        - env:
            - name: SW_OAP_ADDRESS
              value: http://skywalking-oap-server:12800
            - name: TZ
              value: GMT+8
          image: apache/skywalking-ui:8.7.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 15
            periodSeconds: 2
            successThreshold: 1
            tcpSocket:
              port: 8080
            timeoutSeconds: 2
          name: skywalking-ui
          ports:
            - containerPort: 8080
              name: tcp
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 15
            periodSeconds: 2
            successThreshold: 2
            tcpSocket:
              port: 8080
            timeoutSeconds: 2
          resources:
            limits:
              cpu: "4"
              memory: 4Gi
            requests:
              cpu: "2"
              memory: 2Gi
Deploy the two services above with kubectl (make sure the skywalking namespace exists first):
# Deploy the SkyWalking OAP server
kubectl apply -f skywalking-server.yaml
# Deploy the SkyWalking UI
kubectl apply -f skywalking-ui.yaml
Integrating the agent
# Build a base agent image
# ① Download the matching agent package into a directory (step omitted here)
# ② Create a Dockerfile with the following content
FROM centos:7
# Install common packaging and debugging tools
RUN yum install -y wget unzip telnet lsof net-tools bind-utils
# JDK environment variables
ENV JAVA_HOME /usr/lib/jvm/java
ENV PATH $PATH:$JAVA_HOME/bin
ENV CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV ADMIN_HOME /home/admin
# Create the working directory
RUN mkdir -p ${ADMIN_HOME}
# Install OpenJDK
RUN yum -y install java-1.8.0-openjdk-devel
RUN mkdir -p /home/admin/app/ && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' > /etc/timezone
# Copy the SkyWalking agent directory (path relative to the build context)
COPY agent $ADMIN_HOME/agent
# ③ Build the image and push it to the private registry
docker build -t 11.2.16.53:8082/skywalking8.7-jdk8:1.0.0 .
# Log in to the private registry before pushing
docker push 11.2.16.53:8082/skywalking8.7-jdk8:1.0.0
Based on this base image, write the application's Dockerfile:
# The registry's other ip/port; see the registry section above for details
FROM 10.201.0.38:9000/skywalking8.7-jdk8:1.0.0
ENV ADMIN_HOME /home/admin
# The simplest helloworld application is used as the example
ARG APP_JAR=helloworld.jar
ARG EXECUTABLE_PATH=${ADMIN_HOME}/app
COPY target/${APP_JAR} ${EXECUTABLE_PATH}/${APP_JAR}
# Chinese-language support inside the container
ENV LANG="en_US.UTF-8"
# Better webshell experience
ENV TERM=xterm
# The agent path below comes from the base agent image above; the project itself does not need to bundle the agent
RUN echo 'eval java -javaagent:/home/admin/agent/skywalking-agent.jar -Dskywalking.agent.service_name=helloworld -Dskywalking.collector.backend_service=skywalking-oap-server.skywalking:11800 -jar /home/admin/app/helloworld.jar'> /home/admin/start.sh && chmod +x /home/admin/start.sh
WORKDIR ${ADMIN_HOME}
CMD ["/bin/bash", "/home/admin/start.sh"]
Checking the integration
Configuring HPA (autoscaling)
# Add a rule like the following to services that need horizontal scaling: scale out when CPU usage exceeds 40%, with at least 2 and at most 10 replicas
kubectl autoscale deployment helloworld --cpu-percent=40 --min=2 --max=10
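The same rule can be kept declaratively in version control; note that HPA needs the metrics-server installed and CPU requests set on the target Deployment (the template above sets them). A sketch using the autoscaling/v2 API:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: helloworld
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: helloworld
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 40   # scale out above 40% average CPU
```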
Summary
Achievements unlocked
With this article, we have:
- Built a Kubernetes cluster from scratch
- Built private Docker and Maven repositories from scratch
- Built an Elasticsearch cluster and Kibana from scratch for container log search
- Quickly wired services into SkyWalking for trace collection
- Built a DevOps workflow from scratch, with graphically configured pipelines covering source-code-to-image and source-code-to-service
TODO
- Unified service entry via Ingress, with graphical configuration of forwarding rules and HTTPS access
- Collect additional logs with Filebeat and handle log rotation
- Cluster monitoring and alerting based on KubeSphere
See you!!