kubernetes
1 概述
Kubernetes是一款跨节点的容器编排工具,一般情况下都是以集群的方式运行。
官网介绍
Kubernetes也称为K8s,是用于自动部署、扩缩和管理容器化应用程序的开源系统。
它将组成应用程序的容器组合成逻辑单元,以便于管理和服务发现。Kubernetes源自Google 15 年生产环境的运维经验,同时凝聚了社区的最佳创意和实践。
Google每周运行数十亿个容器,Kubernetes 基于与之相同的原则来设计,能够在不扩张运维团队的情况下进行规模扩展。
2 简史
– K8S简史
– 2013年docker开源,IT界的福音,备受关注
– 2014.06 Google基于内部Borg(博格)系统约15年的容器编排经验,使用go语言研发了K8S并将其开源,底层基于docker作为容器运行时
– 2014.12 docker inc公司推出了K8S竞品,docker swarm
– Google kubernetes vs docker inc swarm 【3年对抗赛】 2017年年底结束,k8s完胜。(k8s 72% vs swarm 13%)
– 2014 coreOS 公司推出了rkt容器管理工具并站队K8S
– 2015 Google公司将K8S捐赠给了CNCF组织(作为该组织的种子项目,后于2018年成为第一个毕业项目)。
– 2015 docker inc公司推出了OCI提议,主要针对容器运行时和镜像规范,并开源了runc。
– 2016 Kubernetes社区推出了CRI规范,当时市面上没有任何容器运行时可以直接满足,于是开源了docker-shim组件(会调用docker接口并满足CRI规范)来支持CRI接口;
– 2016,RedHat公司基于cri-o(既符合CRI也符合OCI规范)开发框架让rkt容器管理工具支持CRI接口;
– 2017,docker inc公司将containerd从docker engine剥离,并将containerd开源给了CNCF组织,
– containerd底层调用runc,因此该产品是支持OCI提议的;
– containerd组件本身不支持CRI,因此社区大佬们(包含国内外)集体开发cri-containerd组件,最后合并到containerd项目
– 2018 年国内开始流行K8S,各大云厂商已经开始大规模使用K8S集群,
– 阿里云的ACK的SAAS产品
– 腾讯云的TKE的SAAS产品
– 华为云的CCE的SAAS产品
– ucloud的UK8S的SAAS产品
– 亚马逊的Amazon EKS的SAAS产品
– 京东云,百度云等
– 2018年,coreOS公司被Redhat以2.5亿美元收购。
– 2018年10月29日,IBM宣布以340亿美元的价格收购Red Hat。
– 曾经一度,Docker方面的炒作非常猛。
– Docker从Greylock Partners、Insight Partners和红杉资本等大牌投资者处筹资超过2.7亿美元,
– 2018年估值达到最高峰:13.2亿美元。
– 2019年11月,Docker一分为二,将企业业务出售给了云咨询公司Mirantis(对于OpenStack代码贡献量非常大,能排到前3)。
– 2020年底,Kubernetes社区宣布弃用dockershim,并将在后续版本(最初计划1.22+,实际为1.24)移除docker容器运行时支持,当时年底发布的最新版是1.20.X;
– 据2020年3月11日的报道,微软(Microsoft)曾有意以约6.7亿美元收购Docker,但最终并未成交。
– 2021年底 K8S 1.23的RC版本发布;
– 2022年初,K8S 1.24横空出世,直接将docker-shim组件移除,而是使用containerd作为容器运行时;
– 2023年初,K8S 1.27.X发布;
– 2023年3月,K8S 1.23.17 发布了最后一个支持docker-shim的版本。
– docker和Mirantis公司作为合作伙伴,将维护该项目,运维小伙伴如果需要在K8S 1.24及以后的版本使用docker的话,需要单独安装cri-dockerd组件。
– 2024年初,K8S 1.30.x版本发布
– 2024年12月,K8S 1.32.x版本发布
– 2025年初,K8S 1.33.X版本发布
参考链接:
https://landscape.cncf.io/
3 架构
1 master(control plane,控制平面)
1.1 etcd
分布式键值(key-value)数据库,用于存储K8S集群的所有资源状态数据,基于go语言研发
1.2 api-server
k8s集群的访问唯一入口,可以用于认证鉴权,资源访问控制
1.3 scheduler
调度器,负责完成pod的调度
1.4 controller manager
控制器管理者,维护k8s集群状态
1.5 cloud controller manager
该组件是可选组件,可以不部署,一般云厂商使用
2 slave(worker node工作节点)
2.1 kubelet
管理pod的生命周期,并监控worker node节点的资源状态(cpu,内存,磁盘)和容器状态,周期性上报给api-server
2.2 kube-proxy
负责为service实现负载均衡的组件,底层支持iptables和ipvs两种调度方式(查看当前模式的方法见本节末尾的补充示例)
3 CNI
flannel,calico等
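补充示例: 如果想确认kube-proxy当前使用的是iptables还是ipvs模式,可以参考下面的命令(假设集群由kubeadm部署,kube-proxy的配置保存在kube-system名称空间的configmap中,仅供参考):
# 查看kube-proxy配置中的mode字段,为空时默认使用iptables模式
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -w mode
# 若为ipvs模式,可在worker节点安装ipvsadm后查看转发规则
apt -y install ipvsadm
ipvsadm -Ln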
4 网络类型
1 K8S各组件通信的网络
使用的是物理网卡,默认网段: 10.0.0.0/24。
2 跨节点容器实现通信的网段:
用户可以自定义,学习环境推荐: 10.100.0.0/16。
但是在自定义网段时,要考虑将来能够分配的IP地址数量,"10.100.0.0/16"最多有65536个IP地址。
如果将来容器运行的数量超过该规模时,应该考虑将网段地址调大,比如”10.0.0.0/8″。
3 Service网段:
为容器提供负载均衡和服务发现功能。也是需要一个独立的网段,比如”10.200.0.0/16″最多有65536个IP地址。
同理,如果规模较大时,应该考虑网段分配的问题。
5 部署方式
1 官方默认都有两种部署方式: (在生产环境中都可以使用,且都支持高可用环境。咱们学习过程中,建议选择kubeadm。)
1.1 二进制部署K8S集群
手动部署K8S各个组件,配置文件,启动脚本及证书生成,kubeconfig文件。
配置繁琐,对新手不友好,尤其是证书管理。但是可以自定义配置信息,老手部署的话2小时起步,新手20小时+
1.2 kubeadm部署K8S集群
是官方提供的一种快速部署K8S各组件的工具,在镜像准备就绪的情况下,基于容器的方式部署。
需要提前安装kubelet,docker或者containerd,kubeadm组件。
配置简单,适合新手。新手在镜像准备好的情况下,仅需要2分钟部署完毕。
2 第三方提供的部署方式:
2.1 国内公司:
– 青云科技: kubesphere
底层基于kubeadm快速部署K8S,提供了丰富的图形化管理界面。
– kuboard
底层基于kubeadm快速部署K8S,提供了丰富的图形化管理界面。
– kubeasz
底层基于二进制方式部署,结合ansible的playbook实现的快速部署管理K8S集群。
2.2 国外的产品:
– rancher:
和国内的kubesphere很相似,也是K8S发行商,提供了丰富的图形化管理界面。
还基于K8S研发出来了K3S,号称轻量级的K8S。
2.3 云厂商:
– 阿里云的ACK的SAAS产品
– 腾讯云的TKE的SAAS产品
– 华为云的CCE的SAAS产品
– ucloud的UK8S的SAAS产品
– 亚马逊的Amazon EKS的SAAS产品
– 京东云,百度云的SAAS产品等。
2.4 其他部署方式:
– minikube:
适合在Windows等个人电脑上快速搭建单节点K8S,用于开发和学习环境,不建议生产环境部署。
– kind:
可以部署多套K8S环境,轻量级的命令行管理工具。
– yum:
不推荐,版本支持较低,默认是1.5.2。
CNCF技术蓝图:
https://landscape.cncf.io/
3 二进制部署和kubeadm部署的区别:
相同点:
都可以部署K8S高可用集群。
不同点:
– 1.部署难度: kubeadm简单.
– 2.部署时间: kubeadm短时间。
– 3.证书管理: 二进制需要手动生成,而kubeadm自建一个10年的CA证书,各组件证书有效期为1年(证书到期时间的检查方法见本节后的补充示例)。
– 4.软件安装: kubeadm需要单独安装kubeadm,kubectl和kubelet组件,由kubelet组件启动K8S其他相关Pod,而二进制需要安装除了kubeadm的其他K8S组件。
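补充示例: 上面提到kubeadm签发的各组件证书有效期为1年,可以用kubeadm自带的子命令检查证书到期时间(在master节点执行,1.20+版本可用,仅供参考):
kubeadm certs check-expiration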
6 集群环境准备
1 环境准备
官方文档
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
环境准备:
硬件配置: 2core 4GB
磁盘: 50GB+
操作系统: Ubuntu 22.04.4 LTS
IP和主机名:
10.0.0.231 master231
10.0.0.232 worker232
10.0.0.233 worker233
k8s版本:1.23.17
原因:此版本是k8s最后一个支持docker-shim的版本,以后都不支持了
2 关闭swap分区(所有节点执行)
swapoff -a && sysctl -w vm.swappiness=0 # 临时关闭
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab # 基于配置文件关闭
3 确保各个节点mac地址和product_uuid唯一(所有节点执行)
apt install net-tools
ifconfig ens37 | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid
4 检查网络是否通外网(所有节点执行)
ping baidu.com -c 10
5 允许iptables检查桥接流量(所有节点执行)
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
modprobe br_netfilter
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
8 安装kubeadm,kubectl,kubelet(所有节点执行)
8.1 导入软件源的GPG密钥
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
8.2 添加kubernetes软件源(此处使用阿里云镜像源)
cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
8.3 查看当前环境支持的k8s版本
apt-cache madison kubeadm
8.4 安装kubeadm,kubectl,kubelet
apt-get -y install kubelet=1.23.17-00 kubeadm=1.23.17-00 kubectl=1.23.17-00
8.5 检查各组件版本
kubeadm version
kubelet --version
kubectl version
参考链接:
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/
9 检查时区(所有节点执行)
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ll /etc/localtime
date -R
此时关机拍快照
7 基于kubeadm组件初始化k8s的master组件
这里所有操作均只对master节点操作
1 提前导入镜像
root@master231:~# docker load -i master-1.23.17.tar.gz
root@master231:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.23.17 62bc5d8258d6 2 years ago 130MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.23.17 1dab4fc7b6e0 2 years ago 120MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.23.17 bc6794cb54ac 2 years ago 51.9MB
registry.aliyuncs.com/google_containers/kube-proxy v1.23.17 f21c8d21558c 2 years ago 111MB
registry.aliyuncs.com/google_containers/etcd 3.5.6-0 fce326961ae2 2 years ago 299MB
registry.aliyuncs.com/google_containers/coredns v1.8.6 a4ca41631cc7 3 years ago 46.8MB
registry.aliyuncs.com/google_containers/pause 3.6 6270bb605e12 3 years ago 683kB
2 使用kubeadm初始化节点
root@master231:~# kubeadm init --kubernetes-version=v1.23.17 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=yangsenlin.top --apiserver-advertise-address=10.0.0.231
相关参数说明:
--kubernetes-version:
指定K8S master组件的版本号。
--image-repository:
指定下载k8s master组件的镜像仓库地址。
--pod-network-cidr:
指定Pod的网段地址。
--service-cidr:
指定SVC的网段。
--service-dns-domain:
指定service的域名。若不指定,默认为"cluster.local"。
--apiserver-advertise-address:
多网卡时指定apiserver对外监听的IP地址。
使用kubeadm初始化集群时,可能会出现如下的输出信息:
[init]
使用初始化的K8S版本。
[preflight]
主要是做安装K8S集群的前置工作,比如下载镜像,这个时间取决于你的网速。
[certs]
生成证书文件,默认存储在”/etc/kubernetes/pki”目录哟。
[kubeconfig]
生成K8S集群的默认配置文件,默认存储在”/etc/kubernetes”目录哟。
[kubelet-start]
启动kubelet,
环境变量默认写入:”/var/lib/kubelet/kubeadm-flags.env”
配置文件默认写入:”/var/lib/kubelet/config.yaml”
[control-plane]
使用静态的目录,默认的资源清单存放在:”/etc/kubernetes/manifests”。
此过程会创建静态Pod,包括”kube-apiserver”,”kube-controller-manager”和”kube-scheduler”
[etcd]
创建etcd的静态Pod,默认的资源清单存放在:””/etc/kubernetes/manifests”
[wait-control-plane]
等待kubelet从资源清单目录”/etc/kubernetes/manifests”启动静态Pod。
[apiclient]
等待所有的master组件正常运行。
[upload-config]
创建名为”kubeadm-config”的ConfigMap在”kube-system”名称空间中。
[kubelet]
创建名为"kubelet-config-1.23"的ConfigMap在"kube-system"名称空间中,其中包含集群中kubelet的配置
[upload-certs]
跳过此步骤,详情请参考"--upload-certs"参数
[mark-control-plane]
标记控制面板,包括打标签和污点,目的是为了标记master节点。
[bootstrap-token]
创建token口令,例如:”kbkgsa.fc97518diw8bdqid”。
如下图所示,这个口令将来在加入集群节点时很有用,而且对于RBAC控制也很有用处哟。
[kubelet-finalize]
更新kubelet的证书文件信息
[addons]
添加附加组件,例如:”CoreDNS”和”kube-proxy”
出现以下则说明成功
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.231:6443 --token v5peds.tj27wmcwsttl2ydk \
--discovery-token-ca-cert-hash sha256:829b9f6b642b548ec4e311028eb6d56c04f3f52240804aa631337ab43165af4d
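补充说明: 上面输出的token默认24小时后过期,如果之后还需要添加新的worker节点,可以在master节点重新生成完整的加入命令(示例,仅供参考):
kubeadm token create --print-join-command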
3 拷贝授权文件,用于管理k8s集群
root@master231:~# mkdir -p $HOME/.kube
root@master231:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master231:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@master231:~#
4 查看master组件是否正常工作
root@master231:~# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
etcd-0 Healthy {“health”:”true”,”reason”:””}
controller-manager Healthy ok
cs是componentstatuses的缩写,所以两个命令效果是一样的
root@master231:~# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {“health”:”true”,”reason”:””}
root@master231:~#
5 查看工作节点
root@master231:~# kubectl get no
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 4m30s v1.23.17
root@master231:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 4m36s v1.23.17
root@master231:~# kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master231 NotReady control-plane,master 4m46s v1.23.17 10.0.0.231
root@master231:~#
6.master初始化不成功解决问题的方法
可能存在的原因:
– 由于没有禁用swap分区导致无法完成初始化;
– 每个节点的CPU核心数不足2core导致无法完成初始化;
– 没有手动导入镜像;
解决方案:
– 1.检查上面的是否有上面的情况
free -h
lscpu
– 2.重置当前节点环境
[root@master231 ~]# kubeadm reset -f
– 3.再次尝试初始化master节点
8 基于kubeadm部署worker组件
这里所有操作均只对两个worker节点操作,两个worker节点操作不分先后
1 提前导入镜像
root@worker232:~# docker load -i slave-1.23.17.tar.gz
root@worker232:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
flannel/flannel v0.24.3 f6f0ee58f497 13 months ago 78.6MB
flannel/flannel-cni-plugin v1.4.0-flannel1 77c1250c26d9 14 months ago 9.87MB
registry.aliyuncs.com/google_containers/kube-proxy v1.23.17 f21c8d21558c 2 years ago 111MB
registry.aliyuncs.com/google_containers/coredns v1.8.6 a4ca41631cc7 3 years ago 46.8MB
registry.aliyuncs.com/google_containers/pause 3.6 6270bb605e12 3 years ago 683kB
root@worker232:~#
2 将worker节点加入到master集群
233节点
root@worker233:~# kubeadm join 10.0.0.231:6443 --token yur5x7.wl1dhvwgbif7n1ac \
--discovery-token-ca-cert-hash sha256:2ebc712f7a7355137b43d67d46c79844898acb4012c508b114f61cfbfb51875b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0402 11:50:12.272164 59289 utils.go:69] The recommended value for “resolvConf” in “KubeletConfiguration” is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.
root@worker233:~#
232节点
root@worker232:~# kubeadm join 10.0.0.231:6443 --token yur5x7.wl1dhvwgbif7n1ac \
--discovery-token-ca-cert-hash sha256:2ebc712f7a7355137b43d67d46c79844898acb4012c508b114f61cfbfb51875b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0402 11:49:56.952952 59532 utils.go:69] The recommended value for “resolvConf” in “KubeletConfiguration” is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.
root@worker232:~#
3 验证worker节点是否加入成功(master节点执行)
root@master231:~# kubectl get no
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 20m v1.23.17
worker232 NotReady
worker233 NotReady
root@master231:~# kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master231 NotReady control-plane,master 20m v1.23.17 10.0.0.231
worker232 NotReady
worker233 NotReady
root@master231:~#
6 有些时候开机启动cni0可能会消失,需要写一个检查脚本(每个节点执行)
root@master231:~# cat check_cni0.sh
#!/bin/bash
# 检查 cni0 网卡是否存在
if ! ip link show cni0 &> /dev/null; then
    echo "cni0 网卡不存在,正在创建..."
    # 获取 flannel.1 的 IP 地址(形如 10.100.x.0)
    FLANNEL_IP=$(ip -4 addr show flannel.1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
    if [ -z "$FLANNEL_IP" ]; then
        echo "无法获取 flannel.1 的 IP 地址,脚本退出。"
        exit 1
    fi
    # cni0 网桥使用与 flannel.1 同网段的 .1 地址(例如 flannel.1 为 10.100.0.0 时,cni0 为 10.100.0.1)
    CNI0_IP="${FLANNEL_IP%.*}.1"
    # 创建 cni0 网桥
    ip link add name cni0 type bridge
    # 为 cni0 分配与 flannel.1 相同网段的 IP 地址
    ip addr add ${CNI0_IP}/24 dev cni0
    # 启用 cni0 网桥
    ip link set cni0 up
    echo "cni0 网卡已创建并配置完成。"
else
    echo "cni0 网卡已存在,无需创建。"
fi
root@master231:~# cp check_cni0.sh /usr/local/bin/
root@master231:~# vim /etc/rc.local
root@master231:~# vi /etc/rc.local
root@master231:~# chmod +x /etc/rc.local
root@master231:~# cat /etc/rc.local
#!/bin/bash
/usr/local/bin/check_cni0.sh
root@master231:~# sh check_cni0.sh
cni0 网卡不存在,正在创建...
6: cni0:
link/ether b6:bd:ee:6b:55:5a brd ff:ff:ff:ff:ff:ff
RTNETLINK answers: File exists
cni0 网卡已创建并配置完成。
9 部署flannel的CNI插件
CNI
container network interface
1 部署网络插件地址
https://kubernetes.io/docs/concepts/cluster-administration/addons/
点击flannel跳转到github下载地址
https://github.com/flannel-io/flannel#deploying-flannel-manually
复制flannel资源清单下载地址
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
下载下来后需要做两个步骤
1.1 因为下载的资源清单默认pod-network地址是10.244.0.0/16,需要修改为部署master组件时候设置的10.100.0.0/16
1.2 在kube-flannel.yml中搜索image,使用docker pull拉取涉及到的image(所有节点执行)
root@master231:~# cat kube-flannel.yml | grep image
image: docker.io/flannel/flannel:v0.25.6
image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
image: docker.io/flannel/flannel:v0.25.6
root@master231:~# docker pull docker.io/flannel/flannel:v0.25.6
root@master231:~# docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
2 部署资源清单(master节点执行)
部署前集群环境如下
root@master231:~# kubectl get no
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 8m6s v1.23.17
worker232 NotReady
worker233 NotReady
root@master231:~# kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master231 NotReady control-plane,master 8m13s v1.23.17 10.0.0.231
worker232 NotReady
worker233 NotReady
root@master231:~# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
3 验证flannel是否部署成功(master节点执行)
root@master231:~# kubectl get no
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 39m v1.23.17
worker232 Ready
worker233 Ready
root@master231:~# kubectl get pods -o wide -n kube-flannel
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-4k9mm 1/1 Running 0 3m58s 10.0.0.233 worker233
kube-flannel-ds-bx879 1/1 Running 0 3m58s 10.0.0.232 worker232
kube-flannel-ds-sb5l9 1/1 Running 0 3m58s 10.0.0.231 master231
4 查看网卡
4.1 master231节点网卡
root@master231:~# ifconfig
docker0: flags=4099
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:b8:bc:95:18 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163
inet 192.168.137.231 netmask 255.255.255.0 broadcast 192.168.137.255
inet6 fe80::250:56ff:fe37:70cb prefixlen 64 scopeid 0x20
ether 00:50:56:37:70:cb txqueuelen 1000 (Ethernet)
RX packets 614472 bytes 882596312 (882.5 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 110749 bytes 7048996 (7.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens37: flags=4163
inet 10.0.0.231 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::250:56ff:fe35:de69 prefixlen 64 scopeid 0x20
ether 00:50:56:35:de:69 txqueuelen 1000 (Ethernet)
RX packets 9874 bytes 1173652 (1.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 117229 bytes 168823751 (168.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163
inet 10.100.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::c46b:8aff:fea2:d629 prefixlen 64 scopeid 0x20
ether c6:6b:8a:a2:d6:29 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 12 overruns 0 carrier 0 collisions 0
lo: flags=73
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 320546 bytes 63975014 (63.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 320546 bytes 63975014 (63.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@master231:~#
4.2 worker232节点
root@worker232:~# ifconfig
cni0: flags=4163
inet 10.100.1.1 netmask 255.255.255.0 broadcast 10.100.1.255
inet6 fe80::b4bd:eeff:fe6b:555a prefixlen 64 scopeid 0x20
ether ea:76:0a:f4:0b:1c txqueuelen 1000 (Ethernet)
RX packets 732 bytes 61883 (61.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 752 bytes 92948 (92.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:33:0f:11:e1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163
inet 192.168.137.232 netmask 255.255.255.0 broadcast 192.168.137.255
inet6 fe80::250:56ff:fe26:cd31 prefixlen 64 scopeid 0x20
ether 00:50:56:26:cd:31 txqueuelen 1000 (Ethernet)
RX packets 184295 bytes 264386223 (264.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 33455 bytes 2134847 (2.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens37: flags=4163
inet 10.0.0.232 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::250:56ff:fe24:61f5 prefixlen 64 scopeid 0x20
ether 00:50:56:24:61:f5 txqueuelen 1000 (Ethernet)
RX packets 58631 bytes 84478895 (84.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4912 bytes 570292 (570.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163
inet 10.100.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::68c8:9aff:fe00:1525 prefixlen 64 scopeid 0x20
ether 6a:c8:9a:00:15:25 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 12 overruns 0 carrier 0 collisions 0
lo: flags=73
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 98 bytes 8093 (8.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 98 bytes 8093 (8.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth294b5b44: flags=4163
inet6 fe80::403f:99ff:fed5:45df prefixlen 64 scopeid 0x20
ether e2:a3:02:bc:65:b8 txqueuelen 0 (Ethernet)
RX packets 364 bytes 35909 (35.9 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 387 bytes 47387 (47.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth31aeeb30: flags=4163
inet6 fe80::b448:daff:fe51:b1c8 prefixlen 64 scopeid 0x20
ether b6:48:da:51:b1:c8 txqueuelen 0 (Ethernet)
RX packets 370 bytes 36306 (36.3 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 384 bytes 47147 (47.1 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@worker232:~#
4.3 worker233节点
root@worker233:~# ifconfig
docker0: flags=4099
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:46:66:68:37 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163
inet 192.168.137.233 netmask 255.255.255.0 broadcast 192.168.137.255
inet6 fe80::250:56ff:fe27:1b99 prefixlen 64 scopeid 0x20
ether 00:50:56:27:1b:99 txqueuelen 1000 (Ethernet)
RX packets 520244 bytes 767347313 (767.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 44260 bytes 2788219 (2.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens37: flags=4163
inet 10.0.0.233 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::250:56ff:fe2c:85e1 prefixlen 64 scopeid 0x20
ether 00:50:56:2c:85:e1 txqueuelen 1000 (Ethernet)
RX packets 58657 bytes 84362405 (84.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5025 bytes 609905 (609.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163
inet 10.100.2.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::a840:1dff:fe4d:6f37 prefixlen 64 scopeid 0x20
ether aa:40:1d:4d:6f:37 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 12 overruns 0 carrier 0 collisions 0
lo: flags=73
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 110 bytes 9615 (9.6 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 110 bytes 9615 (9.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@worker233:~#
查看网络后我们发现
1 master231 多出来一个flannel.1虚拟网卡
2 worker232 节点多出来一个cni0网卡和flannel.1网卡,且是同网段
3 worker233 节点多出来一个flannel.1网卡
按道理来说master231和worker233节点也应该有一个cni0网卡,所以需要手动创建
cni0网桥用于同一节点内pod之间的通信
flannel.1网卡用于跨节点的pod之间通信
5 cni0网卡重建
注意:
手动创建重启机器后会丢失配置,最好是开机自动巡检
—> 假设 master231的flannel.1是10.100.0.0网段。
root@master231:~# ip link add cni0 type bridge
root@master231:~# ip link set dev cni0 up
root@master231:~# ip addr add 10.100.0.1/24 dev cni0
—> 假设 worker233的flannel.1是10.100.2.0网段。
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.2.1/24 dev cni0
5.1 查看master231节点的网卡
root@master231:~# ifconfig
cni0: flags=4099
inet 10.100.0.1 netmask 255.255.255.0 broadcast 0.0.0.0
ether b6:bd:ee:6b:55:5a txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163
inet 10.100.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::c46b:8aff:fea2:d629 prefixlen 64 scopeid 0x20
ether c6:6b:8a:a2:d6:29 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 14 overruns 0 carrier 0 collisions 0
5.2 查看worker233节点网卡
root@worker233:~# ifconfig
cni0: flags=4099
inet 10.100.2.1 netmask 255.255.255.0 broadcast 0.0.0.0
ether b6:bd:ee:6b:55:5a txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163
inet 10.100.2.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::a840:1dff:fe4d:6f37 prefixlen 64 scopeid 0x20
ether aa:40:1d:4d:6f:37 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 14 overruns 0 carrier 0 collisions 0
10 验证pod的CNI网络是否正常
1 编写pod资源清单
root@master231:~# mkdir -pv /manifests/pods
cd /manifests/pods
root@master231:/manifests/pods# cat > network-cni-test.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v2
spec:
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2
EOF
root@master231:/manifests/pods# kubectl apply -f network-cni-test.yaml
root@master231:/manifests/pods# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-v2   1/1     Running   0          7s    10.100.2.5   worker233
11 kubectl 工具实现自动补全功能
root@master231:/manifests/pods# kubectl completion bash > ~/.kube/completion.bash.inc
root@master231:/manifests/pods# echo source '$HOME/.kube/completion.bash.inc' >> ~/.bashrc
root@master231:/manifests/pods# source ~/.bashrc
root@master231:/manifests/pods#
再次关机拍快照
12 主机巡检流程
1 查看master组件是否正常
root@master231:~# kubectl get cs
2 查看工作节点是否就绪
root@master231:~# kubectl get no
3 查看flannel组件是否正常运行
root@master231:~# kubectl get pods -o wide -n kube-flannel
4 检查网卡flannel.1和cni0
root@master231:~# ifconfig
13 部署k8s可能会出现的错误
1 时区配置错误;
2 初始化失败可能是cpu核心数不足、内存不足或者没有禁用swap分区
3 镜像拉取失败,在对应节点手动导入镜像
4 节点名称不一致,需要修改过来,建议重做,加深印象;
5 flannel.1和cni0网段不一致,删除cni0网卡继续执行
ip link del cni0 type bridge
6 虚拟机无法联网,检查配置是否正确:
7 虚拟机开不起来了;
14 k8s资源
linux 一切皆文件,k8s一切皆资源
1 查看k8s集群内置资源
随着后期的附加组件部署,资源的种类会增多
root@master231:~# kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
…
1.1 NAME
资源的名称
1.2 SHORTNAMES
资源的简称
1.3 APIVERSION
资源的api版本号
1.4 NAMESPACED
是否支持名称空间
1.5 KIND
资源的类型
15 资源清单的结构
1 apiVersion:
资源的API版本号,固定的,但是定义时必须声明
2 kind
资源的类型。固定的,但是定义时必须声明
3 metadata
元数据资源,用于描述资源的信息,包括但不限于:name,labels,namespace,annotations
4 spec
期望资源运行的状态
5 status
资源的实际状态,由k8s自行维护
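下面给出一个最简的资源清单骨架,用于示意上述字段的层级关系(其中pod名称demo仅为示意,可自行替换):
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    apps: demo
spec:
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
# status字段无需手动编写,由k8s自行维护,资源创建后可用"kubectl get pod demo -o yaml"查看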
16 资源清单的创建、查看和删除
1 编写资源清单
root@master231:/manifests/pods# cat 01-pods-xiuxian.yaml
#声明资源的版本号
apiVersion: v1
#声明资源的类型
kind: Pod
#声明资源的元数据信息
metadata:
  #声明资源的名称
  name: xiuxian
#定义资源的期望状态
spec:
  #定义容器的相关配置
  containers:
    #定义容器的名称
  - name: c1
    #定义镜像名称
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
2 创建资源
2.1 create(非幂等性)
kubectl create -f 01-pods-xiuxian.yaml
2.2 apply(幂等性)
kubectl apply -f 01-pods-xiuxian.yaml
幂等操作:多次执行相同的操作,结果保持一致。
非幂等操作:多次执行相同的操作,可能会导致不同的结果
3 查看资源
root@master231:/manifests/pods# kubectl apply -f 01-pods-xiuxian.yaml
pod/xiuxian created
root@master231:/manifests/pods# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian 1/1 Running 0 38s 10.100.2.8 worker233
相关资源说明:
3.1 NAME
资源的名称。
3.2 READY:
以”/”为分隔符,右侧的数字表示Pod有多少个容器。左侧的数字表示Pod内有多少个容器处于运行状态。
3.3 STATUS:
资源是否处于就绪状态,目前认为”Running”就表示资源处于正常运行状态。
3.4 RESTARTS:
Pod内容器的重启次数总和。
3.5 AGE:
表示该资源创建的时间统计。
3.6 IP:
表示Pod的IP地址。
3.7 NODE:
表示Pod调度到哪个worker node节点。
4 删除资源
root@master231:/manifests/pods# kubectl delete -f 01-pods-xiuxian.yaml
pod “xiuxian” deleted
root@master231:/manifests/pods# kubectl get pods -o wide
No resources found in default namespace.
5 声明式创建带标签的pod资源
root@master231:/manifests/pods# kubectl apply -f 02-pods-xiuxian-labels.yaml
pod/xiuxian-labels created
root@master231:/manifests/pods# cat 02-pods-xiuxian-labels.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-labels
  #给资源打标签,标签的key和value都可以自定义
  labels:
    xingming: zhangsan
    city: Sichuan
spec:
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:/manifests/pods#
6 查看资源标签
root@master231:/manifests/pods# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
xiuxian-labels 1/1 Running 0 52s 10.100.2.9 worker233
7 基于标签匹配pod,从而删除pod
root@master231:/manifests/pods# kubectl delete pods -l city=Sichuan
pod “xiuxian-labels” deleted
root@master231:/manifests/pods#
8 基于资源的名称删除pod
root@master231:/manifests/pods# kubectl delete pod xiuxian-labels
pod “xiuxian-labels” deleted
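补充示例: 除了用-l基于标签删除,也可以用标签选择器过滤查看pod,支持等值和集合两种写法(仅供参考):
kubectl get pods -l city=Sichuan --show-labels
kubectl get pods -l xingming=zhangsan,city=Sichuan
kubectl get pods -l 'city in (Sichuan,cq)'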
17 响应式管理pod资源
1 创建pod资源
root@master231:/manifests/pods# kubectl run xiuxian --image=crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
pod/xiuxian created
2 查看pod资源
root@master231:/manifests/pods# kubectl get pods
NAME READY STATUS RESTARTS AGE
xiuxian 1/1 Running 0 29s
3 给资源打标签
root@master231:/manifests/pods# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
xiuxian 1/1 Running 0 73s run=xiuxian
root@master231:/manifests/pods# kubectl label pod xiuxian xingming=zhangsan city=Sichuan
pod/xiuxian labeled
root@master231:/manifests/pods# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
xiuxian 1/1 Running 0 4m57s city=Sichuan,run=xiuxian,xingming=zhangsan
root@master231:/manifests/pods#
4 修改资源标签
root@master231:/manifests/pods# kubectl label pods xiuxian city=cq --overwrite
pod/xiuxian unlabeled
root@master231:/manifests/pods# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
xiuxian 1/1 Running 0 8m15s city=cq,run=xiuxian,xingming=zhangsan
root@master231:/manifests/pods#
5 删除资源
root@master231:/manifests/pods# kubectl delete pods --all
pod “xiuxian” deleted
18 指定nodeName节点调度
1 编写资源清单
root@master231:/manifests/pods# cat 03-pods-xiuxian-nodeName.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-nodename
  labels:
    xingming: zhangsan
    city: Sichuan
spec:
  nodeName: worker232
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
2 部署应用
root@master231:/manifests/pods# kubectl apply -f 03-pods-xiuxian-nodeName.yaml
pod/xiuxian-nodename created
root@master231:/manifests/pods#
19 hostNetwork使用宿主机网络
[root@master231 pods]# cat 04-pods-xiuxian-hostNetwork.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ysl-ysl-hostnetwork
  labels:
    school: ysl
    class: ysl
spec:
  # 使用宿主机网络,不为容器分配网络名称空间
  hostNetwork: true
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
[root@master231 pods]#
2 部署服务
root@master231:/manifests/pods# kubectl apply -f 04-pods-xiuxian-hostNetwork.yaml
pod/xiuxian-hostnetwork created
3 windows访问测试
http://10.0.0.233
20 将多个资源合并为一个资源清单
1 同时执行多个资源清单文件创建多个资源
root@master231:/manifests/pods# kubectl apply -f 03-pods-xiuxian-nodeName.yaml -f 04-pods-xiuxian-hostNetwork.yaml
pod/xiuxian-nodename created
pod/xiuxian-hostnetwork created
2 将多个资源写在同一个资源清单中
root@master231:/manifests/pods# cat 05-pods-all-in-one-file.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v1
  labels:
    apps: v1
spec:
  hostNetwork: true
  nodeName: worker232
  containers:
  - image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    name: xiuxian
# 此处的"---"表示一个资源定义的结束(yaml文档分隔符)
---
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v2
  labels:
    apps: v2
spec:
  hostNetwork: true
  nodeName: worker233
  containers:
  - image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2
    name: xiuxian
root@master231:/manifests/pods# kubectl apply -f 05-pods-all-in-one-file.yaml
pod/xiuxian-v1 created
pod/xiuxian-v2 created
21 容器的类型
1 基础架构容器
pause:3.6
优先于业务容器启动,负责名称空间的初始化工作。
其中,基础架构容器的ipc,net,time,user这四个名称空间会被业务容器共享
2 初始化容器
初始化容器在基础架构容器之后启动,在业务容器之前运行的容器
初始化容器可以定义多个,当所有的初始化容器都执行完毕后才会去执行业务容器
一般初始化容器的应用场景是为业务容器做一些初始化的动作而设定的,如果没有必要,可以不定义初始化容器
3 业务容器
指的是实际用户运行的容器
4 删除业务容器时,业务容器会被自动重新拉起,且pod的ip地址并不会变动
5 删除基础架构容器时,此基础架构容器会重新初始化,导致ip地址变动
6 验证基础架构容器和业务容器的关系
6.1 在master231节点上部署应用
root@master231:/manifests/pods# kubectl apply -f 05-pods-all-in-one-file.yaml
pod/xiuxian-v1 created
pod/xiuxian-v2 created
6.2 在worker233节点上查看容器
root@worker233:~# docker ps -a | grep xiuxian-
2b49ec88c400 d65adc8a2f32 “/docker-entrypoint.…” 19 seconds ago Up 18 seconds k8s_xiuxian_xiuxian-v2_default_0cd7f72e-cd5b-4560-a731-8d9be81c6577_0
e67f1c3cfa44 f28fd43be4ad “/docker-entrypoint.…” 19 seconds ago Up 18 seconds k8s_xiuxian_xiuxian-v1_default_8d1b9146-2542-485e-b1d5-1d3319a0a515_0
3bb838ecd88c registry.aliyuncs.com/google_containers/pause:3.6 “/pause” 19 seconds ago Up 18 seconds k8s_POD_xiuxian-v2_default_0cd7f72e-cd5b-4560-a731-8d9be81c6577_0
2d49b42ccea4 registry.aliyuncs.com/google_containers/pause:3.6 “/pause” 19 seconds ago Up 18 seconds k8s_POD_xiuxian-v1_default_8d1b9146-2542-485e-b1d5-1d3319a0a515_0
6.3 在worker233节点查看xiuxian:v1的基础架构容器进程和业务容器进程
root@worker233:~# docker inspect -f "{{.State.Pid}}" 2d49b42ccea4
257658
root@worker233:~# docker inspect -f "{{.State.Pid}}" e67f1c3cfa44
257870
root@worker233:~# ll /proc/257658/ns ####基础架构容器的名称空间
total 0
dr-x–x–x 2 65535 65535 0 Apr 6 20:25 ./
dr-xr-xr-x 9 65535 65535 0 Apr 6 20:25 ../
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:29 cgroup -> ‘cgroup:[4026532744]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:25 ipc -> ‘ipc:[4026532658]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:29 mnt -> ‘mnt:[4026532656]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:25 net -> ‘net:[4026532671]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:29 pid -> ‘pid:[4026532659]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:29 pid_for_children -> ‘pid:[4026532659]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:29 time -> ‘time:[4026531834]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:29 time_for_children -> ‘time:[4026531834]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:29 user -> ‘user:[4026531837]’
lrwxrwxrwx 1 65535 65535 0 Apr 6 20:29 uts -> ‘uts:[4026532657]’
root@worker233:~# ll /proc/257870/ns ####业务容器的名称空间
total 0
dr-x–x–x 2 root root 0 Apr 6 20:30 ./
dr-xr-xr-x 9 root root 0 Apr 6 20:25 ../
lrwxrwxrwx 1 root root 0 Apr 6 20:30 cgroup -> ‘cgroup:[4026532820]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 ipc -> ‘ipc:[4026532658]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 mnt -> ‘mnt:[4026532817]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 net -> ‘net:[4026532671]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 pid -> ‘pid:[4026532819]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 pid_for_children -> ‘pid:[4026532819]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 time -> ‘time:[4026531834]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 time_for_children -> ‘time:[4026531834]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 user -> ‘user:[4026531837]’
lrwxrwxrwx 1 root root 0 Apr 6 20:30 uts -> ‘uts:[4026532818]’
从对比来看,业务容器的名称空间和基础架构容器的名称空间有以下4点是相同的
ipc,net,time,user
7 验证初始化容器和业务容器的执行顺序
root@master231:/manifests/pods# cat 06-pods-xiuxian-initContainers.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-initcontainers
  labels:
    xm: yanzhenginitcontainers
spec:
  nodeName: worker233
  initContainers:
  - name: init01
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    # 用于替代Dockerfile的ENTRYPOINT指令
    command: ["sleep","30"]
  - name: init02
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    command: ["sleep","10"]
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
[root@master231 pods]# kubectl apply -f 06-pods-xiuxian-initContainers.yaml
[root@worker233 ~]# docker ps -a | grep xiuxian-initcontainers
8 一个pod启动多个容器
root@master231:/manifests/pods# kubectl apply -f 07-pods-multiple.yaml
pod/multiple-containers created
root@master231:/manifests/pods# kubectl get pods
NAME READY STATUS RESTARTS AGE
multiple-containers 2/2 Running 0 4s
root@master231:/manifests/pods# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
multiple-containers 2/2 Running 1 (6s ago) 16s 10.100.2.18 worker233
root@master231:/manifests/pods# cat 07-pods-multiple.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multiple-containers
  labels:
    xm: zhangsan
    city: Sichuan
spec:
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    command: ["sleep","30"]
  - name: c2
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    command: ["sleep","10"]
root@master231:/manifests/pods#
22 pod排障命令
1 查看描述describe
root@master231:/manifests/pods# cat 08-pods-describe.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multiple-describe
  labels:
    name: describe
spec:
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v111111111
root@master231:/manifests/pods# kubectl apply -f 08-pods-describe.yaml
pod/multiple-describe created
root@master231:/manifests/pods# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
multiple-describe 0/1 ErrImagePull 0 14s 10.100.2.19 worker233
root@master231:/manifests/pods#
root@master231:/manifests/pods# kubectl describe pod multiple-describe
…
Normal Pulling 69s (x4 over 2m36s) kubelet Pulling image “crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v111111111”
Warning Failed 68s (x4 over 2m34s) kubelet Failed to pull image “crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v111111111”: rpc error: code = Unknown desc = Error response from daemon: manifest for crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v111111111 not found: manifest unknown: manifest unknown
Warning Failed 68s (x4 over 2m34s) kubelet Error: ErrImagePull
Warning Failed 41s (x6 over 2m34s) kubelet Error: ImagePullBackOff
Normal BackOff 27s (x7 over 2m34s) kubelet Back-off pulling image “crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v111111111”
root@master231:/manifests/pods#
此处就可以定位问题
修改资源清单中的错误即可
2 执行命令exec
root@master231:/manifests/pods# kubectl delete -f 08-pods-describe.yaml
pod “multiple-describe” deleted
root@master231:/manifests/pods# kubectl apply -f 09-pods-exec.yaml
pod/ysl-multiple-exec created
root@master231:/manifests/pods# cat 09-pods-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multiple-exec
  labels:
    xm: zhangsan
    city: Sichuan
spec:
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    command: ["tail","-f","/etc/hosts"]
  - name: c2
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    command: ["sleep","3600"]
2.1 执行命令查看
root@master231:/manifests/pods# kubectl exec -it multiple-exec -c c1 -- ifconfig
eth0 Link encap:Ethernet HWaddr 1A:7B:FD:6C:92:F9
inet addr:10.100.2.21 Bcast:10.100.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:7 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:666 (666.0 B) TX bytes:42 (42.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@master231:/manifests/pods#
2.2 进入容器查看
root@master231:/manifests/pods# kubectl exec -it multiple-exec -c c1 -- sh
/ # ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0@if16:
link/ether 1a:7b:fd:6c:92:f9 brd ff:ff:ff:ff:ff:ff
inet 10.100.2.21/24 brd 10.100.2.255 scope global eth0
valid_lft forever preferred_lft forever
/ #
3 查看日志 logs
root@master231:/manifests/pods# cat 10-pods-logs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multiple-logs
  labels:
    xingming: zhangsan
    city: Sichuan
spec:
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:/manifests/pods# kubectl apply -f 10-pods-logs.yaml
pod/multiple-logs created
root@master231:/manifests/pods# kubectl get pods
NAME READY STATUS RESTARTS AGE
multiple-logs 1/1 Running 0 14s
3.1 查看实时日志
root@master231:/manifests/pods# kubectl logs -f multiple-logs
3.2 查看最近2分钟日志
root@master231:/manifests/pods# kubectl logs -f --since 2m multiple-logs
3.3 查看指定容器的日志
root@master231:/manifests/pods# kubectl logs -f --since 2m multiple-logs -c c1
3.4 查看容器重启前上一个容器的日志(前提是该容器还存在)
root@master231:/manifests/pods# kubectl logs -f multiple-logs -c c1 -p
4 重启策略restartPolicy
root@master231:/manifests/pods# cat 11-pods-restartPolicy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: restartpolicy
  labels:
    xingming: zhangsan
    city: Sichuan
spec:
  # 指定容器的重启策略,默认值为Always,有效值为: Always, OnFailure, Never。
  # Always:
  #   无论容器是否正常退出,始终重启容器。
  # Never:
  #   无论容器是否正常退出,始终不重启容器。
  # OnFailure:
  #   当容器异常退出时,才会重启容器,正常退出时不重启。
  # restartPolicy: Always
  # restartPolicy: Never
  restartPolicy: OnFailure
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    # command: ["sleep","10"]
    command:
    - "sleep"
    - "30"
root@master231:/manifests/pods# kubectl apply -f 11-pods-restartPolicy.yaml
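部署后可以用下面的命令持续观察pod状态变化,验证重启策略的效果(示例,仅供参考): 容器sleep 30秒后正常退出(退出码为0),在OnFailure策略下不会被重启,pod最终进入Completed状态;若改为Always,则会被反复重启,RESTARTS次数不断增加。
kubectl get pods restartpolicy -o wide -w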
5 复制 cp
root@master231:/manifests/pods# cat 12-pods-cp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-cp
  labels:
    xingming: zhangsan
    city: Sichuan
spec:
  restartPolicy: Always
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
5.1 将宿主机的文件拷贝到容器的指定目录
root@master231:/manifests/pods# kubectl cp 12-pods-cp.yaml xiuxian-cp:/12-pods-cp.yaml
root@master231:/manifests/pods# kubectl exec -it xiuxian-cp -- ls -l /
total 76
-rw-r–r– 1 root root 253 Apr 6 13:16 12-pods-cp.yaml
5.2 将容器内的文件拷贝到宿主机
root@master231:/manifests/pods# kubectl cp xiuxian-cp:/docker-entrypoint.sh /root/docker-entrypoint.sh
tar: removing leading ‘/’ from member names
root@master231:/manifests/pods# ll /root/docker-entrypoint.sh
-rw-r–r– 1 root root 1202 Apr 6 21:17 /root/docker-entrypoint.sh
注意:
容器和宿主机之间拷贝东西都是文件对文件,目录对目录
6 文档说明 explain
root@master231:/manifests/pods# cat 13-pods-explain.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-explain
  labels:
    xingming: zhangsan
    city: Sichuan
spec:
  restartpolicy: Always   # 此处故意写错用于测试
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:/manifests/pods# kubectl apply -f 13-pods-explain.yaml
error: error validating "13-pods-explain.yaml": error validating data: ValidationError(Pod.spec): unknown field "restartpolicy" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
6.1 查看explain说明文档看有哪些字段
root@master231:/manifests/pods# kubectl explain pod.spec
[root@master231 pods]# kubectl explain po.spec.containers.command
[root@master231 pods]# kubectl explain po.spec.containers.args
[root@master231 pods]# kubectl explain po.spec.containers.ports
[root@master231 pods]# kubectl explain po.metadata
[root@master231 pods]# kubectl explain po.metadata.labels
…
7 入口命令和参数 command args
root@master231:/manifests/pods# cat 14-pods-command-args.yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-args
  labels:
    xingming: zhangsan
    city: Sichuan
spec:
  restartPolicy: Always
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
  - name: c2
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    # 相当于替换ENTRYPOINT指令
    #command:
    #- tail
    #- -f
    #- /etc/hosts
    command:
    - tail
    # 相当于替换CMD指令,当command和args同时使用时,args将作为参数传递给command
    args:
    - -f
    - /etc/hosts
root@master231:/manifests/pods# kubectl apply -f 14-pods-command-args.yaml
pod/command-args created
7.1 查看容器的启动命令
root@worker233:~# docker ps -a | grep command-args
a8106258232d f28fd43be4ad “tail -f /etc/hosts” About a minute ago Up About a minute k8s_c2_command-args_default_f39178d8-bfee-4d4f-8350-14622951e3ba_0
e1522eda135f f28fd43be4ad “/docker-entrypoint.…” About a minute ago Up About a minute k8s_c1_command-args_default_f39178d8-bfee-4d4f-8350-14622951e3ba_0
ec9be95027f8 registry.aliyuncs.com/google_containers/pause:3.6 “/pause” About a minute ago Up About a minute k8s_POD_command-args_default_f39178d8-bfee-4d4f-8350-14622951e3ba_0
23 端口暴露
root@master231:/manifests/pods# cat 15-pods-ports.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ysl-ports
  labels:
    xingming: zhangsan
    city: Sichuan
spec:
  restartPolicy: Always
  nodeName: worker233
  containers:
  - name: c1
    image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
    # 配置容器的端口映射,如果容器监听端口,此配置应该配置上便于用户识别。
    # 当然,尽管你的容器监听了80端口,不配置此字段也是可以的,但用户体验不好。
    ports:
      # 容器监听的端口
    - containerPort: 80
      # 表示容器端口使用的协议,支持的协议为: UDP, TCP, or SCTP
      protocol: TCP
      # 给端口起名字
      name: nginx
      # 绑定worker节点的IP地址
      hostIP: 0.0.0.0
      # 将容器绑定到worker节点的81端口,有点类似于docker的端口映射: docker run -p 81:80 ...
      # 一旦配置了该字段,就会修改worker节点的iptables规则,这一点和docker很相似,但是docker会监听端口
      hostPort: 81
    - containerPort: 9200
      name: es-http
    - containerPort: 9300
      name: es-tcp
    - containerPort: 21
    - containerPort: 20
root@master231:/manifests/pods# kubectl apply -f 15-pods-ports.yaml
pod/ysl-ports created
root@master231:/manifests/pods# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ysl-ports 1/1 Running 0 111s 10.100.2.26 worker233
root@master231:/manifests/pods#
root@worker233:~# iptables-save | grep 81
-A CNI-DN-def73841a3876e584d36b -p tcp -m tcp --dport 81 -j DNAT --to-destination 10.100.2.26:80
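补充验证: 由于该pod通过nodeName固定调度到了worker233,可以直接访问worker233宿主机的81端口验证hostPort是否生效(示例,仅供参考):
curl -I http://10.0.0.233:81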
24 安装各种应用
安装gitlab、jenkins、sonarqube、mysql、wordpress、elasticsearch、kibana
提前导入镜像,或者官网下载
1 gitlab
root@master231:/manifests/pods# cat 16-pods-casedemo-gitlab.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gitlab
  labels:
    apps: gitlab
spec:
  hostNetwork: true
  restartPolicy: Always
  nodeName: worker233
  containers:
  - name: c1
    image: gitlab/gitlab-ce:17.5.2-ce.0
    ports:
    - containerPort: 80
root@master231:/manifests/pods# kubectl apply -f 16-pods-casedemo-gitlab.yaml
pod/gitlab created
root@master231:/manifests/pods# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINE
gitlab 1/1 Running 0 11s 10.0.0.233 worker233
1.1 查看初始密码
root@master231:/manifests/pods# kubectl exec -it gitlab -- cat /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails[‘initial_root_password’]` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
# 2. Password hasn’t been changed manually, either via UI or via command line.
#
# If the password shown here doesn’t work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
Password: uhyTfFUBOH55+kasoiSkeD6n4g0gD4YXunE5UW14ddQ=
1.2 登录测试
http://192.168.137.233
root/密码
2 jenkins
root@master231:/manifests/pods# cat 17-pod-jenkins.yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins
  labels:
    apps: jenkins
spec:
  hostNetwork: true
  restartPolicy: Always
  nodeName: worker233
  containers:
  - name: c1
    image: jenkins/jenkins:2.479.1-alpine-jdk21
    ports:
    - containerPort: 8080
root@master231:/manifests/pods# kubectl apply -f 17-pod-jenkins.yaml
pod/jenkins created
root@master231:/manifests/pods#
2.1 查看密码
root@master231:/manifests/pods# kubectl logs jenkins
…
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
cd814b0c82284265820e5cba45998658
This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
*************************************************************
*************************************************************
*************************************************************
root@master231:/manifests/pods# kubectl exec -it jenkins -- cat /var/jenkins_home/secrets/initialAdminPassword
cd814b0c82284265820e5cba45998658
root@master231:/manifests/pods#
2.2 访问测试
http://192.168.137.233:8080
3 sonarqube
root@master231:/manifests/pods# cat 18-pods-sonarqube.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sonarqube
  labels:
    apps: sonarqube
spec:
  hostNetwork: true
  restartPolicy: Always
  nodeName: worker233
  containers:
  - name: c1
    image: sonarqube:9.9.7-community
    ports:
    - containerPort: 9000
root@master231:/manifests/pods# kubectl apply -f 18-pods-sonarqube.yaml
pod/sonarqube created
3.1访问测试
http://192.168.137.233:9000
4 mysql
root@master231:/manifests/pods# cat 19-pods-mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-env
  labels:
    apps: mysql
spec:
  hostNetwork: true
  restartPolicy: Always
  nodeName: worker233
  containers:
  - name: c1
    image: mysql:8.0.36-oracle
    # 启动容器时,传递参数
    args:
    - --character-set-server=utf8
    - --collation-server=utf8_bin
    - --default-authentication-plugin=mysql_native_password
    ports:
    - containerPort: 3306
    # 向容器传递环境变量
    env:
      # 指定变量的名称
    - name: MYSQL_ALLOW_EMPTY_PASSWORD
      # 向变量传递值
      value: "yes"
    - name: MYSQL_DATABASE
      value: test
    - name: MYSQL_USER
      value: test
    - name: MYSQL_PASSWORD
      value: "123456"
root@master231:/manifests/pods# kubectl apply -f 19-pods-mysql.yaml
pod/mysql-env created
4.1 测试
root@master231:/manifests/pods# kubectl exec -it mysql-env -- mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.36 MySQL Community Server – GPL
Copyright (c) 2000, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.
mysql>
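注意: 上面的mysql-env只创建了test库和test用户,而下面的WordPress示例引用的是wordpress库和wordpress用户;如果沿用本环境,需要先手动创建对应的库和账号(示例SQL,仅供参考):
kubectl exec -it mysql-env -- mysql -e "CREATE DATABASE IF NOT EXISTS wordpress; CREATE USER IF NOT EXISTS 'wordpress'@'%' IDENTIFIED BY '123456'; GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%'; FLUSH PRIVILEGES;"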
5 wordpress
root@master231:/manifests/pods# cat 20-pods-wordpress.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wp
  labels:
    apps: mysql
spec:
  hostNetwork: true
  restartPolicy: Always
  nodeName: worker233
  containers:
  - name: c1
    # WordPress关于下面2个镜像启动的是9000端口,默认启用了php-fpm进程
    # image: wordpress:php8.3-fpm-alpine
    # image: wordpress:6.7.1-php8.3-fpm-alpine
    # 推荐使用下面的镜像,默认启动的是80端口,进程包含apache
    image: wordpress:6.7.1-php8.1-apache
    ports:
    - containerPort: 80
    env:
    - name: WORDPRESS_DB_HOST
      value: 10.0.0.233
    - name: WORDPRESS_DB_NAME
      value: wordpress
    - name: WORDPRESS_DB_USER
      value: wordpress
    - name: WORDPRESS_DB_PASSWORD
      value: "123456"
[root@master231 pods]# kubectl apply -f 20-pods-wordpress.yaml
pod/wp created
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide
5.1 访问测试
http://192.168.137.233
6 elasticsearch
root@master231:/manifests/pods# cat 21-pods-es.yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-es
  labels:
    apps: es7
spec:
  hostNetwork: true
  nodeName: worker233
  containers:
  - name: c1
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.24
    ports:
    - containerPort: 9200
      name: http
    - containerPort: 9300
      name: tcp
    env:
    - name: discovery.type
      value: single-node
    - name: cluster.name
      value: es7
    - name: ES_JAVA_OPTS
      value: -Xms256m -Xmx256m
root@master231:/manifests/pods# kubectl apply -f 21-pods-es.yaml
pod/single-es created
6.1 访问测试
root@master231:/manifests/pods# curl http://10.0.0.233:9200
7 kibana
root@master231:/manifests/pods# cat 22-pods-kibana.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kibana
  labels:
    apps: kibana
spec:
  hostNetwork: true
  nodeName: worker233
  containers:
  - name: c1
    image: docker.elastic.co/kibana/kibana:7.17.24
    ports:
    - containerPort: 5601
      name: webui
    env:
    - name: ELASTICSEARCH_HOSTS
      value: http://10.0.0.233:9200
    - name: I18N_LOCALE
      value: zh-CN
root@master231:/manifests/pods# kubectl apply -f 22-pods-kibana.yaml
pod/kibana created
7.1 访问测试
http://10.0.0.233:5601/
25 控制器
常见控制器:
rc,rs,deployment,ds,job,cj
使用kubectl api-resources查看以下所有控制器
1 rc
1.1 rc概述
rc的全称为"replicationcontrollers",表示副本控制器
1.2 创建资源并部署
root@master231:/manifests/replicationcontrollers# cat 01-rc-xiuxian.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: xiuxian-rc
  labels:
    xingming: zhangsan
spec:
  # 创建Pod的副本数量
  replicas: 3
  # 当前控制器关联的Pod标签,如果不写,则默认为template字段中的pod标签。
  selector:
    apps: xiuxian
    city: Sichuan
  # 定义Pod的模板
  template:
    metadata:
      labels:
        apps: xiuxian
        city: Sichuan
        xingming: lisi
    spec:
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:/manifests/replicationcontrollers# kubectl apply -f 01-rc-xiuxian.yaml
replicationcontroller/xiuxian-rc created
root@master231:/manifests/replicationcontrollers#
root@master231:/manifests/replicationcontrollers# kubectl get rc,pods --show-labels -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR LABELS
replicationcontroller/xiuxian-rc 3 3 3 44s c1 crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1 apps=xiuxian,city=Sichuan xingming=zhangsan
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pod/xiuxian-rc-7dv8t 1/1 Running 0 44s 10.100.1.22 worker232
pod/xiuxian-rc-p84m5 1/1 Running 0 44s 10.100.1.21 worker232
pod/xiuxian-rc-qhx6g 1/1 Running 0 44s 10.100.1.23 worker232
1.3 删除pod和删除rc资源对比
1.3.1 删除pod
root@master231:/manifests/replicationcontrollers# kubectl delete pods –all
pod “xiuxian-rc-7dv8t” deleted
pod “xiuxian-rc-p84m5” deleted
pod “xiuxian-rc-qhx6g” deleted
root@master231:/manifests/replicationcontrollers# kubectl get rc,pods
NAME DESIRED CURRENT READY AGE
replicationcontroller/xiuxian-rc 3 3 3 2m56s
NAME READY STATUS RESTARTS AGE
pod/xiuxian-rc-bnsdw 1/1 Running 0 12s
pod/xiuxian-rc-mqvhw 1/1 Running 0 12s
pod/xiuxian-rc-rzvtv 1/1 Running 0 12s
1.3.2 删除rc
root@master231:/manifests/replicationcontrollers# kubectl delete rc xiuxian-rc
replicationcontroller “xiuxian-rc” deleted
root@master231:/manifests/replicationcontrollers# kubectl get rc,pods
No resources found in default namespace.
结论:
rc控制器部署资源后,删除pod是无效的,会被rc重新拉起来,但是删除rc后,pod会被级联删除
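补充示例: rc的副本数也可以在不修改资源清单的情况下通过scale命令临时调整(示例,仅供参考):
kubectl scale rc xiuxian-rc --replicas=5
kubectl get rc,pods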
2 rs
2.1 rs概述
replicaset和rc功能类似,也是保证pod副本数量始终存活,但是比rc更加轻量级,功能更加完善
2.2 创建资源并部署
root@master231:~/manifests/replicasets# cat 01-rs-matchLabels-xiuxian.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-xiuxian
  labels:
    xingming: zhangsan
spec:
  replicas: 5
  selector:
    # 基于标签匹配Pod
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        city: Sichuan
        xingming: lisi
    spec:
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:~/manifests/replicasets# kubectl apply -f 01-rs-matchLabels-xiuxian.yaml
replicaset.apps/rs-xiuxian created
root@master231:~/manifests/replicasets# kubectl get pods
NAME READY STATUS RESTARTS AGE
rs-xiuxian-9mbr7 1/1 Running 0 4s
rs-xiuxian-rbfh2 1/1 Running 0 4s
rs-xiuxian-rr6pd 1/1 Running 0 4s
rs-xiuxian-t4nlz 1/1 Running 0 4s
rs-xiuxian-w82zk 1/1 Running 0 4s
2.3 rs可以实现rc没有的功能
2.3.1 环境准备
root@master231:~/manifests/replicasets# kubectl run test01 --image=crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1 -l apps=v1
pod/test01 created
root@master231:~/manifests/replicasets# kubectl run test02 --image=crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2 -l apps=v2
pod/test02 created
2.3.2 部署应用
root@master231:~/manifests/replicasets# cat 02-rs-matchExpressions-xiuxian.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-xiuxian-matchexpressions
  labels:
    xingming: zhangsan
spec:
  replicas: 5
  selector:
    # 基于标签表达式匹配Pod标签
    matchExpressions:
      # 声明标签的key
    - key: apps
      # 声明标签的value,可以有多个值
      values:
      - xiuxian
      - v1
      - v2
      # 表示key和value之间的映射关系,有效值为: In, NotIn, Exists and DoesNotExist
      operator: In
  template:
    metadata:
      labels:
        apps: xiuxian
        city: Sichuan
        xingming: lisi
    spec:
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:~/manifests/replicasets# kubectl apply -f 02-rs-matchExpressions-xiuxian.yaml
replicaset.apps/rs-xiuxian-matchexpressions created
2.3.3 查看结果
root@master231:~/manifests/replicasets# kubectl get rs,pods --show-labels -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR LABELS
replicaset.apps/rs-xiuxian-matchexpressions 5 5 5 16s c1 crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1 apps in (v1,v2,xiuxian) xingming=zhangsan
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pod/rs-xiuxian-matchexpressions-586h4 1/1 Running 0 16s 10.100.1.56 worker232
pod/rs-xiuxian-matchexpressions-7nmpp 1/1 Running 0 16s 10.100.1.58 worker232
pod/rs-xiuxian-matchexpressions-bbw8v 1/1 Running 0 16s 10.100.1.57 worker232
pod/test01 1/1 Running 0 40s 10.100.1.55 worker232
pod/test02 1/1 Running 0 47s 10.100.1.54 worker232
2.3.4 结论
rs期望5个副本,但标签选择器已经匹配到了test01和test02这两个已存在的pod,所以只新创建了3个副本
3 deploy
3.1 概述
deployment底层基于rs实现pod副本控制,deploy支持“声明式”更新
3.2 应用部署
3.2.1 matchLabels
root@master231:~/manifests# mkdir deployments
root@master231:~/manifests# cd deployments/
root@master231:~/manifests/deployments# kubectl apply -f 01-deploy-matchLables-xiuxian.yaml
deployment.apps/deploy-xiuxian created
root@master231:~/manifests/deployments# kubectl get deploy,rs,pods -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/deploy-xiuxian 5/5 5 5 4s c1 crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1 apps=xiuxian
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/deploy-xiuxian-66b55568cd 5 5 5 4s c1 crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1 apps=xiuxian,pod-template-hash=66b55568cd
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/deploy-xiuxian-66b55568cd-csqzt 1/1 Running 0 4s 10.100.1.78 worker232
pod/deploy-xiuxian-66b55568cd-kkwpz 1/1 Running 0 4s 10.100.1.75 worker232
pod/deploy-xiuxian-66b55568cd-rcgrq 1/1 Running 0 4s 10.100.1.74 worker232
pod/deploy-xiuxian-66b55568cd-t9fth 1/1 Running 0 4s 10.100.1.77 worker232
pod/deploy-xiuxian-66b55568cd-znsnj 1/1 Running 0 4s 10.100.1.76 worker232
root@master231:~/manifests/deployments# kubectl delete rs deploy-xiuxian-66b55568cd
replicaset.apps “deploy-xiuxian-66b55568cd” deleted
root@master231:~/manifests/deployments# kubectl get deploy,rs,pods
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/deploy-xiuxian 5/5 5 5 3m46s
NAME DESIRED CURRENT READY AGE
replicaset.apps/deploy-xiuxian-66b55568cd 5 5 5 17s
NAME READY STATUS RESTARTS AGE
pod/deploy-xiuxian-66b55568cd-49qct 1/1 Running 0 17s
pod/deploy-xiuxian-66b55568cd-72swr 1/1 Running 0 17s
pod/deploy-xiuxian-66b55568cd-fp4ww 1/1 Running 0 17s
pod/deploy-xiuxian-66b55568cd-k2b9s 1/1 Running 0 17s
pod/deploy-xiuxian-66b55568cd-vmtsz 1/1 Running 0 17s
root@master231:~/manifests/deployments# kubectl delete deployments.apps deploy-xiuxian
deployment.apps “deploy-xiuxian” deleted
root@master231:~/manifests/deployments# kubectl get deploy,rs,pods
No resources found in default namespace.
删除rs后,deploy会立即重建rs和对应的Pod;删除deploy后,rs和Pod一并被清理。由此可以得出deploy是基于rs来实现Pod副本控制的
3.2.2 matchExpressions
root@master231:~/manifests/deployments# cat 02-deploy-matchExpressions-xiuxian.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-matchexpressions
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchExpressions:
– key: apps
values:
– xiuxian
– v1
– v2
operator: In
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:~/manifests/deployments# kubectl apply -f 02-deploy-matchExpressions-xiuxian.yaml
deployment.apps/deploy-xiuxian-matchexpressions created
root@master231:~/manifests/deployments# kubectl get deploy,rs,pods –show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
deployment.apps/deploy-xiuxian-matchexpressions 5/5 5 5 95s xingming=zhangsan
NAME DESIRED CURRENT READY AGE LABELS
replicaset.apps/deploy-xiuxian-matchexpressions-66b55568cd 5 5 5 95s apps=xiuxian,city=Sichuan,pod-template-hash=66b55568cd,xingming=lisi
NAME READY STATUS RESTARTS AGE LABELS
pod/deploy-xiuxian-matchexpressions-66b55568cd-djfv6 1/1 Running 0 95s apps=xiuxian,city=Sichuan,pod-template-hash=66b55568cd,xingming=lisi
pod/deploy-xiuxian-matchexpressions-66b55568cd-fm4vp 1/1 Running 0 95s apps=xiuxian,city=Sichuan,pod-template-hash=66b55568cd,xingming=lisi
pod/deploy-xiuxian-matchexpressions-66b55568cd-hpf9k 1/1 Running 0 95s apps=xiuxian,city=Sichuan,pod-template-hash=66b55568cd,xingming=lisi
pod/deploy-xiuxian-matchexpressions-66b55568cd-w47xl 1/1 Running 0 95s apps=xiuxian,city=Sichuan,pod-template-hash=66b55568cd,xingming=lisi
pod/deploy-xiuxian-matchexpressions-66b55568cd-wmhzs 1/1 Running 0 95s apps=xiuxian,city=Sichuan,pod-template-hash=66b55568cd,xingming=lisi
root@master231:~/manifests/deployments# kubectl delete -f 02-deploy-matchExpressions-xiuxian.yaml
deployment.apps “deploy-xiuxian-matchexpressions” deleted
root@master231:~/manifests/deployments# kubectl get deploy,rs,pods
No resources found in default namespace.
root@master231:~/manifests/deployments#
4 对比rc,rs,deploy的关系
4.1 相同点
都能控制副本数量始终存活
4.2 不同点
1 rc的标签选择器比较单一,key和value只能成对出现(等值匹配)
2 rs相比rc功能更强大,一个key可以匹配多个value,即一对多
3 deployment底层基于rs实现pod副本控制;相比rc、rs而言,deploy还支持声明式更新(滚动更新与回滚)
5 ds
5.1 ds概述
daemonset可以让每个worker节点有且仅有一个pod运行,也支持声明式更新
5.2 应用部署
root@master231:~/manifests# mkdir daemonsets
root@master231:~/manifests# cd daemonsets/
root@master231:~/manifests/daemonsets#
root@master231:~/manifests/daemonsets# cat 01-ds-xiuxian.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ds-xiuxain
labels:
xingming: zhangsan
spec:
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:~/manifests/daemonsets# kubectl apply -f 01-ds-xiuxian.yaml
daemonset.apps/ds-xiuxain created
root@master231:~/manifests/daemonsets# kubectl get ds,pods -o wide
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/ds-xiuxain 1 1 1 1 1
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ds-xiuxain-2dwfp 1/1 Running 0 29s 10.100.1.99 worker232
6 jobs
6.1 概述
用于实现一次性任务
6.2 应用部署
root@master231:~/manifests# mkdir jobs
root@master231:~/manifests# cd jobs/
root@master231:~/manifests/jobs# vi 01-jobs-xiuxian.yaml
root@master231:~/manifests/jobs# cat 01-jobs-xiuxian.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: jobs-xiuxian
labels:
xingming: zhangsan
spec:
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
restartPolicy: OnFailure
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
command:
– sleep
– “5”
root@master231:~/manifests/jobs# kubectl apply -f 01-jobs-xiuxian.yaml
job.batch/jobs-xiuxian created
root@master231:~/manifests/jobs# kubectl get jobs,pods
NAME COMPLETIONS DURATION AGE
job.batch/jobs-xiuxian 1/1 8s 15s
NAME READY STATUS RESTARTS AGE
pod/ds-xiuxain-2dwfp 1/1 Running 0 5m41s
pod/jobs-xiuxian-znfxb 0/1 Completed 0 15s
资源清单中定义了command为sleep 5,容器5秒后正常退出,所以Pod的STATUS为Completed
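Job还支持并行执行多次任务,下面是两个常用字段的示例片段(取值仅作演示假设),它们写在Job的spec下,与template同级:
spec:
  # 该Job总共需要成功完成5次
  completions: 5
  # 同一时刻最多并行运行2个Pod
  parallelism: 2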
root@master231:~/manifests/jobs# kubectl delete ds ds-xiuxain
daemonset.apps “ds-xiuxain” deleted
root@master231:~/manifests/jobs# kubectl get pods
NAME READY STATUS RESTARTS AGE
jobs-xiuxian-znfxb 0/1 Completed 0 109s
root@master231:~/manifests/jobs# kubectl delete jobs.batch jobs-xiuxian
job.batch “jobs-xiuxian” deleted
root@master231:~/manifests/jobs# kubectl get pods
No resources found in default namespace.
root@master231:~/manifests/jobs#
7 cj
7.1 概述
cronjob底层基于job控制器周期性地创建pod,用于定义周期性任务
7.2 应用部署
root@master231:~/manifests# mkdir cronjob
root@master231:~/manifests# cd cronjob/
root@master231:~/manifests/cronjob# vi 01-cj-xiuxian.yaml
root@master231:~/manifests/cronjob# cat 01-cj-xiuxian.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: cj-xiuxian
labels:
xingming: zhangsan
spec:
# 周期性调度Jobs控制器,参考: https://en.wikipedia.org/wiki/Cron
schedule: “* * * * *”
# 定义Job模板,而非Pod模板
jobTemplate:
spec:
# 定义Pod模板
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
restartPolicy: Never
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
command:
– /bin/sh
– -c
– date -R; echo “test_cronjob”
root@master231:~/manifests/cronjob# kubectl apply -f 01-cj-xiuxian.yaml
cronjob.batch/cj-xiuxian created
root@master231:~/manifests/cronjob# kubectl get cj,jobs,pods
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/cj-xiuxian * * * * * False 0 9s 24s
NAME COMPLETIONS DURATION AGE
job.batch/cj-xiuxian-29068361 1/1 3s 9s
NAME READY STATUS RESTARTS AGE
pod/cj-xiuxian-29068361-qbl66 0/1 Completed 0 9s
root@master231:~/manifests/cronjob# kubectl logs pod/cj-xiuxian-29068361-qbl66
Tue, 08 Apr 2025 08:41:00 +0000
test_cronjob
root@master231:~/manifests/cronjob# kubectl get cj,jobs,pods
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/cj-xiuxian * * * * * False 0 51s 66s
NAME COMPLETIONS DURATION AGE
job.batch/cj-xiuxian-29068361 1/1 3s 51s
NAME READY STATUS RESTARTS AGE
pod/cj-xiuxian-29068361-qbl66 0/1 Completed 0 51s
root@master231:~/manifests/cronjob# kubectl logs pod/cj-xiuxian-29068361-qbl66
root@master231:~/manifests/cronjob# kubectl delete cj cj-xiuxian
cronjob.batch “cj-xiuxian” deleted
root@master231:~/manifests/cronjob# kubectl get pods
No resources found in default namespace.
root@master231:~/manifests/cronjob#
8 sts有状态服务
8.1 概述
K8S 1.9+引入,主要作用就是部署有状态服务,自带特性:
– 1.每个Pod有独立的存储;
– 2.每个Pod有唯一的网络标识;
– 3.有顺序的启动和停止Pod;
所谓的有状态服务,指的是各个实例启动时的逻辑并不相同,比如部署MySQL主从时,第一个Pod(主库)的启动逻辑和第二个Pod(从库)的启动逻辑并不相同。
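以下面清单中的sts-xiuxian和headless-apps为例,StatefulSet的稳定标识大致如下(命名空间按default假设):
# Pod按固定序号命名,依次为: sts-xiuxian-0、sts-xiuxian-1 ... sts-xiuxian-4
# 每个Pod有稳定的DNS记录,格式为 <pod名>.<headless svc名>.<命名空间>.svc.cluster.local,例如:
#   sts-xiuxian-0.headless-apps.default.svc.cluster.local
# 配合volumeClaimTemplates,每个Pod还会绑定独立的pvc,例如: www-sts-xiuxian-0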
root@master231:~/manifests/statefulsets# cat 01-sts-xiuxian.yaml
apiVersion: v1
kind: Service
metadata:
name: headless-apps
spec:
ports:
– port: 80
name: web
clusterIP: None
selector:
app: v1
—
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: sts-xiuxian
spec:
selector:
matchLabels:
app: v1
# 和headless svc的名称保持一致
serviceName: “headless-apps”
# 升级策略
updateStrategy:
# 指定更新类型
type: RollingUpdate
# 配置滚动更新的参数
rollingUpdate:
# 指定更新分区的序号,只有Pod序号大于等于该值才会被更新,本例中序号小于2的Pod(sts-xiuxian-0、sts-xiuxian-1)不更新
partition: 2
replicas: 5
# 定义Pod的创建和删除的顺序,有效值为: OrderedReady(default),Parallel。
# podManagementPolicy: Parallel
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: v1
spec:
initContainers:
– name: init01
image: harbor.ysl.com/ysl-xiuxian/apps:v1
volumeMounts:
– name: www
# 与下方echo写入的路径保持一致,init容器写入的index.html才会落到www卷中
mountPath: /ysl
env:
– name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
command:
– /bin/sh
– -c
– echo “www.yangsenlin.top —> ${POD_NAME}” > /ysl/index.html
containers:
– name: c1
# image: harbor.ysl.com/ysl-xiuxian/apps:v1
image: harbor.ysl.com/ysl-xiuxian/apps:v2
ports:
– containerPort: 80
name: web
volumeMounts:
– name: www
mountPath: /usr/share/nginx/html
# 卷申请模板,会为Pod自动绑定pvc
volumeClaimTemplates:
– metadata:
name: www
spec:
accessModes: [ “ReadWriteOnce” ]
storageClassName: “nfs-csi”
resources:
requests:
storage: 5Mi
limits:
storage: 20Mi
root@master231:~/manifests/statefulsets#
root@master231:~/manifests/statefulsets# kubectl apply -f 01-sts-xiuxian.yaml
service/headless-apps created
statefulset.apps/sts-xiuxian created
root@master231:~/manifests/statefulsets#
查看启动顺序即可
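可以用下面的命令观察Pod按序创建(sts-xiuxian-0先Ready后才会创建sts-xiuxian-1,依此类推),以及每个副本自动创建的独立pvc:
root@master231:~/manifests/statefulsets# kubectl get pods -l app=v1 -w
root@master231:~/manifests/statefulsets# kubectl get pvc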
26 Pod调度
常见调度有
nodeName
hostPort
hostNetwork
resources
nodeSelector
taints
tolerations
cordon
uncordon
drain
podaffinity
nodeaffinity
podantiaffinity
1 nodeName指定节点调度
root@master231:~/manifests# mkdir scheduler
root@master231:~/manifests# cd scheduler
root@master231:~/manifests/scheduler# vi 01-scheduler-nodeName.yaml
root@master231:~/manifests/scheduler# cat 01-scheduler-nodeName.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-nodename
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
nodeName: worker233
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:~/manifests/scheduler# kubectl apply -f 01-scheduler-nodeName.yaml
deployment.apps/deploy-xiuxian-nodename created
root@master231:~/manifests/scheduler# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-nodename-889c57b8b-b7xtp 0/1 Pending 0 24s
deploy-xiuxian-nodename-889c57b8b-dkdts 0/1 Pending 0 24s
deploy-xiuxian-nodename-889c57b8b-lccn6 0/1 Pending 0 24s
deploy-xiuxian-nodename-889c57b8b-rw2fg 0/1 Pending 0 24s
deploy-xiuxian-nodename-889c57b8b-w4cfb 0/1 Pending 0 24s
root@master231:~/manifests/scheduler#
虽然nodeName归在调度一类,但它会绕过调度器,直接将Pod绑定到指定节点,并没有真正用到scheduler
root@master231:~/manifests/scheduler# kubectl describe deployments.apps deploy-xiuxian-nodename
….
Normal ScalingReplicaSet 2m26s deployment-controller Scaled up replica set deploy-xiuxian-nodename-889c57b8b to 5
从事件中可以看到只有deployment-controller的扩容记录,并没有default-scheduler的调度记录
root@master231:~/manifests/scheduler# kubectl delete deployments.apps deploy-xiuxian-nodename
2 hostPort端口暴露
root@master231:~/manifests/scheduler# cat 02-scheduler-ports.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-ports
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
ports:
– containerPort: 80
hostPort: 81
root@master231:~/manifests/scheduler# kubectl apply -f 02-scheduler-ports.yaml
deployment.apps/deploy-xiuxian-ports created
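hostPort会把容器的80端口映射到Pod所在worker节点的81端口,部署后可先查看Pod落在哪个节点,再用该节点IP访问验证(下面的节点IP按本环境的10.0.0.232假设):
root@master231:~/manifests/scheduler# kubectl get pods -o wide
root@master231:~/manifests/scheduler# curl 10.0.0.232:81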
3 hostNetwork使用宿主机网络
root@master231:~/manifests/scheduler# cat 03-scheduler-hostNetwork.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-hostnetwork
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
hostNetwork: true
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
ports:
– containerPort: 80
root@master231:~/manifests/scheduler# kubectl apply -f 03-scheduler-hostNetwork.yaml
deployment.apps/deploy-xiuxian-hostnetwork created
root@master231:~/manifests/scheduler# kubectl get deploy,rs,pods -o wide
4 resources
1 resources概述
表示可以对pod内的某个容器做资源限制,可以配置Pod调度时的期望资源(requests)和运行时的使用上限(limits)
如果一个pod不配置资源上限,则默认可以使用worker node节点上的所有资源
2 requests定义期望资源
root@master231:/manifests/scheduler# cat 04-scheduler-resources-requests.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-resources
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
containers:
– name: c1
image: forestysl2020/ysl-linux-tools:v0.1
# 为容器分配一个标准输入,类似于docker run -i
stdin: true
# 为容器配置资源限制
resources:
# 表示用户的期望资源,符合期望才会被调度
requests:
cpu: 0.5
# memory: 1G
memory: 10G
root@master231:/manifests/scheduler# kubectl apply -f 04-scheduler-resources-requests.yaml
deployment.apps/deploy-xiuxian-resources created
root@master231:/manifests/scheduler# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-xiuxian-resources-754bc47779-czzsk 0/1 Pending 0 4s
deploy-xiuxian-resources-754bc47779-rmwp5 0/1 Pending 0 4s
deploy-xiuxian-resources-754bc47779-v8bh8 0/1 Pending 0 4s
deploy-xiuxian-resources-754bc47779-xv5jc 0/1 Pending 0 4s
deploy-xiuxian-resources-754bc47779-zbqjz 0/1 Pending 0 4s
因为memory配置的期望是10G,没有任何节点能满足,所以所有Pod都处于Pending状态
将memory: 10G修改为1G即可正常调度。但只配置requests有一个问题:容器实际使用的cpu和内存可以超过期望值,可能会把宿主机资源全部占完,此时需要配合limits来配置
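下面是一个同时配置requests和limits的容器片段示例(数值仅作演示假设):
      resources:
        # 调度时的期望资源,只有节点可分配资源满足requests才会被调度
        requests:
          cpu: 0.5
          memory: 1Gi
        # 运行时的使用上限,cpu超限会被限流,内存超限容器会被OOMKilled
        limits:
          cpu: 1
          memory: 2Gi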
压力测试
压力测试命令
stress -m 5 --vm-bytes 200000000 --vm-keep --verbose
5 nodeselector 节点标签调度
1 概述
nodeSelector可以基于节点标签将pod调度到对应的worker节点
2 应用部署
root@master231:~# kubectl label nodes worker232 xingming=zhangsan
node/worker232 labeled
root@master231:~# kubectl label nodes worker233 xingming=lisi
node/worker233 labeled
root@master231:~# kubectl get nodes -l xingming --show-labels
NAME STATUS ROLES AGE VERSION LABELS
worker232 Ready
worker233 Ready
[root@master231 scheduler]# cat 06-scheduler-nodeSelector.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-nodeselector
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
# 基于节点标签调度Pod
nodeSelector:
xingming: lisi
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
[root@master231 scheduler]# kubectl apply -f 06-scheduler-nodeSelector.yaml
root@master231:~/manifests/scheduler# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-nodeselector-bf8f56fbf-8td2f 1/1 Running 0 14s 10.100.2.33 worker233
deploy-xiuxian-nodeselector-bf8f56fbf-9mdp8 1/1 Running 0 14s 10.100.2.32 worker233
deploy-xiuxian-nodeselector-bf8f56fbf-fm6zl 1/1 Running 0 14s 10.100.2.34 worker233
deploy-xiuxian-nodeselector-bf8f56fbf-z4mww 1/1 Running 0 14s 10.100.2.31 worker233
deploy-xiuxian-nodeselector-bf8f56fbf-zpqn8 1/1 Running 0 14s 10.100.2.30 worker233
6 taints 污点
1 概述
taints翻译为污点,taints作用在worker node节点上
taints格式为 key[=value]:effect
其中key和value可以自定义,但长度不得超过63个字符,其中value可以省略不写
effect有3种类型
NoSchedule
不接受新的Pod,已经调度到该节点的Pod并不驱逐。
PreferNoSchedule:
尽可能调度到其他节点,当其他节点不满足时,再调度到当前节点。
NoExecute:
不接受新的Pod调度,与此同时,还会驱逐已经调度到该节点的所有Pod,不推荐使用。
温馨提示:
只要effect不同,尽管key和value相同,则表示2个不同的污点。
2 查看污点
root@master231:~/manifests/scheduler# kubectl describe nodes| grep -i taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints:
Taints:
3 打污点
root@master231:~/manifests/scheduler# kubectl taint nodes worker233 xingming=zhangsan:NoSchedule
node/worker233 tainted
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
Taints:
Taints: xingming=zhangsan:NoSchedule
root@master231:~/manifests/scheduler# kubectl taint nodes --all city=Sichuan:PreferNoSchedule
node/master231 tainted
node/worker232 tainted
node/worker233 tainted
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taint -A2
Taints: node-role.kubernetes.io/master:NoSchedule
city=Sichuan:PreferNoSchedule
Unschedulable: false
—
Taints: city=Sichuan:PreferNoSchedule
Unschedulable: false
Lease:
—
Taints: xingming=zhangsan:NoSchedule
city=Sichuan:PreferNoSchedule
Unschedulable: false
4 删除污点
在key后面加个减号-
root@master231:~/manifests/scheduler# kubectl taint node --all city-
node/master231 untainted
node/worker232 untainted
node/worker233 untainted
root@master231:~/manifests/scheduler# kubectl taint nodes worker232 xingming-
error: taint “xingming” not found
root@master231:~/manifests/scheduler# kubectl taint nodes worker233 xingming-
node/worker233 untainted
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
Taints:
Taints:
5 修改污点
root@master231:~/manifests/scheduler# kubectl taint nodes worker233 xingming=zhangsan:NoSchedule
node/worker233 tainted
root@master231:~/manifests/scheduler# kubectl taint nodes worker233 xingming=zhangsan:NoExecute
node/worker233 tainted
root@master231:~/manifests/scheduler# kubectl describe nodes worker233| grep -i taint
Taints: xingming=zhangsan:NoExecute
root@master231:~/manifests/scheduler#
root@master231:~/manifests/scheduler# kubectl taint nodes worker233 xingming=lisi:NoExecute --overwrite
node/worker233 modified
root@master231:~/manifests/scheduler# kubectl describe nodes worker233| grep -i taint
Taints: xingming=lisi:NoExecute
6 清理污点
root@master231:~/manifests/scheduler# kubectl taint nodes worker233 xingming-
node/worker233 untainted
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
Taints:
Taints:
root@master231:~/manifests/scheduler#
7 验证污点
root@master231:~/manifests/scheduler# cat 07-scheduler-Taints.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-taints
labels:
xingming: zhangsan
spec:
replicas: 10
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
resources:
requests:
cpu: 0.2
memory: 500Mi
8 tolerations污点容忍
1 概述
一个pod想要调度到有污点的worker node上,则该pod需要容忍该节点上的污点(PreferNoSchedule除外,它只是尽量不调度)
2 打污点
root@master231:~/manifests/scheduler# kubectl taint node worker232 xingming=zhangsan:NoSchedule
node/worker232 tainted
root@master231:~/manifests/scheduler# kubectl taint nodes worker233 xingming=lisi:NoExecute
node/worker233 tainted
root@master231:~/manifests/scheduler# kubectl taint nodes worker233 city=Sichuan:NoSchedule
node/worker233 tainted
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taints -A2
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
—
Taints: xingming=zhangsan:NoSchedule
Unschedulable: false
Lease:
—
Taints: xingming=lisi:NoExecute
city=Sichuan:NoSchedule
Unschedulable: false
root@master231:~/manifests/scheduler#
3 编写资源清单
root@master231:~/manifests/scheduler# cat 08-scheduler-tolerations.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-tolerations
labels:
xingming: zhangsan
spec:
replicas: 10
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: Sichuan
xingming: lisi
spec:
# 配置污点容忍
tolerations:
# 匹配污点的key
– key: node-role.kubernetes.io/master
# 匹配污点的effect
effect: NoSchedule
# 表示key和value之间的关系,有效值为:Exists,Equal(default)。
operator: Equal
– key: school
value: laonanhai
effect: NoExecute
operator: Equal
– key: class
value: test
effect: NoSchedule
operator: Equal
– key: school
value: yangsenlin
effect: NoSchedule
# 无视任何污点,生产环境不推荐使用,临时测试可以.
#tolerations:
#- operator: Exists
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
resources:
requests:
cpu: 0.2
memory: 500Mi
4 测试
9 cordon
1 概述
标记节点不可调度(SchedulingDisabled),与此同时会给节点打上node.kubernetes.io/unschedulable:NoSchedule污点
2 cordon测试
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taint -A2
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
—
Taints:
Unschedulable: false
Lease:
—
Taints:
Unschedulable: false
Lease:
root@master231:~/manifests/scheduler# kubectl get pods
No resources found in default namespace.
root@master231:~/manifests/scheduler# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 4d2h v1.23.17
worker232 Ready
worker233 Ready
root@master231:~/manifests/scheduler# kubectl cordon worker232
node/worker232 cordoned
root@master231:~/manifests/scheduler# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 4d2h v1.23.17
worker232 Ready,SchedulingDisabled
worker233 Ready
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taint -A2
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
—
Taints: node.kubernetes.io/unschedulable:NoSchedule
Unschedulable: true
Lease:
—
Taints:
Unschedulable: false
Lease:
10 uncordon
1 概述
和cordon相反
2 uncordon测试
root@master231:~/manifests/scheduler# kubectl uncordon worker232
node/worker232 uncordoned
root@master231:~/manifests/scheduler# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 4d2h v1.23.17
worker232 Ready
worker233 Ready
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
Taints:
Taints:
11 drain 驱逐
1 概述
drain可以驱逐已经调度到节点上的pod,底层会先调用cordon将节点标记为不可调度,再逐个驱逐Pod
2 drain 测试
root@master231:~/manifests/scheduler# kubectl apply -f 07-scheduler-Taints.yaml
root@master231:~/manifests/scheduler# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-xiuxian-taints-55d8ff8b5c-257sk 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-2lmn2 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-57fkn 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-87f79 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-gzj77 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-j7m7m 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-lmckj 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-r95nl 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-s7bzs 1/1 Running 0 21s
deploy-xiuxian-taints-55d8ff8b5c-x2n8w 1/1 Running 0 21s
root@master231:~/manifests/scheduler#
root@master231:~/manifests/scheduler# kubectl drain worker233
node/worker233 cordoned
error: unable to drain node "worker233" due to error:cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-flannel/kube-flannel-ds-k7dp4, kube-system/kube-proxy-hn6g6, continuing command…
There are pending nodes to be drained:
worker233
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-flannel/kube-flannel-ds-k7dp4, kube-system/kube-proxy-hn6g6
root@master231:~/manifests/scheduler# kubectl drain worker233 --ignore-daemonsets
node/worker233 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-k7dp4, kube-system/kube-proxy-hn6g6
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-x2n8w
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-57fkn
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-gzj77
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-j7m7m
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-r95nl
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-s7bzs
pod/deploy-xiuxian-taints-55d8ff8b5c-gzj77 evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-j7m7m evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-x2n8w evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-r95nl evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-s7bzs evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-57fkn evicted
node/worker233 drained
root@master231:~/manifests/scheduler# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-taints-55d8ff8b5c-257sk 1/1 Running 0 2m23s 10.100.1.112 worker232
deploy-xiuxian-taints-55d8ff8b5c-2lmn2 1/1 Running 0 2m23s 10.100.1.115 worker232
deploy-xiuxian-taints-55d8ff8b5c-468z4 1/1 Running 0 39s 10.100.1.118 worker232
deploy-xiuxian-taints-55d8ff8b5c-4wtr5 0/1 Pending 0 39s
deploy-xiuxian-taints-55d8ff8b5c-87f79 1/1 Running 0 2m23s 10.100.1.114 worker232
deploy-xiuxian-taints-55d8ff8b5c-dspkp 1/1 Running 0 39s 10.100.1.116 worker232
deploy-xiuxian-taints-55d8ff8b5c-fsqfk 0/1 Pending 0 39s
deploy-xiuxian-taints-55d8ff8b5c-khnvf 1/1 Running 0 39s 10.100.1.117 worker232
deploy-xiuxian-taints-55d8ff8b5c-lmckj 1/1 Running 0 2m23s 10.100.1.113 worker232
deploy-xiuxian-taints-55d8ff8b5c-z5nxm 0/1 Pending 0 39s
root@master231:~/manifests/scheduler# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 4d3h v1.23.17
worker232 Ready
worker233 Ready,SchedulingDisabled
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints:
default deploy-xiuxian-taints-55d8ff8b5c-257sk 200m (10%) 0 (0%) 500Mi (13%) 0 (0%) 3m
default deploy-xiuxian-taints-55d8ff8b5c-2lmn2 200m (10%) 0 (0%) 500Mi (13%) 0 (0%) 3m
default deploy-xiuxian-taints-55d8ff8b5c-468z4 200m (10%) 0 (0%) 500Mi (13%) 0 (0%) 76s
default deploy-xiuxian-taints-55d8ff8b5c-87f79 200m (10%) 0 (0%) 500Mi (13%) 0 (0%) 3m
default deploy-xiuxian-taints-55d8ff8b5c-dspkp 200m (10%) 0 (0%) 500Mi (13%) 0 (0%) 76s
default deploy-xiuxian-taints-55d8ff8b5c-khnvf 200m (10%) 0 (0%) 500Mi (13%) 0 (0%) 76s
default deploy-xiuxian-taints-55d8ff8b5c-lmckj 200m (10%) 0 (0%) 500Mi (13%) 0 (0%) 3m
Taints: node.kubernetes.io/unschedulable:NoSchedule
12 nodeaffinity 节点亲和力
1 概述
nodeAffinity 表示pod调度到期望节点
2 应用部署
2.1 给节点打标签
root@master231:~/manifests/scheduler# kubectl label nodes master231 dc=shanghai
node/master231 labeled
root@master231:~/manifests/scheduler# kubectl label nodes worker232 dc=Sichuan
node/worker232 labeled
root@master231:~/manifests/scheduler# kubectl label nodes worker233 dc=shenzhen
node/worker233 labeled
root@master231:~/manifests/scheduler# kubectl get nodes --show-labels | grep dc
master231 Ready control-plane,master 5d15h v1.23.17 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dc=shanghai,kubernetes.io/arch=amd64,kubernetes.io/hostname=master231,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
worker232 Ready
worker233 Ready
root@master231:~/manifests/scheduler#
2.2 使用nodeAffinity实现nodeSelector功能
root@master231:~/manifests/scheduler# kubectl explain deployment
root@master231:~/manifests/scheduler# cat 09-scheduler-nodeAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-nodeaffinity
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: cq
xm: lisi
spec:
tolerations:
– operator: Exists
# 定义亲和性
affinity:
# 定义节点亲和性
nodeAffinity:
# 定义硬限制
requiredDuringSchedulingIgnoredDuringExecution:
# 定义节点选择器
nodeSelectorTerms:
# 基于表达式匹配节点
– matchExpressions:
# 定义节点标签的key
– key: dc
# 定义节点标签的value
values:
– Sichuan
# 定义key和values之间的关系,有效值为: In, NotIn, Exists, DoesNotExist, Gt, Lt
operator: In
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
2.3 部署应用
root@master231:~/manifests/scheduler# kubectl apply -f 09-scheduler-nodeAffinity.yaml
deployment.apps/deploy-xiuxian-nodeaffinity created
root@master231:~/manifests/scheduler# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-nodeaffinity-7df457c957-hg2s2 1/1 Running 0 13s 10.100.1.18 worker232
deploy-xiuxian-nodeaffinity-7df457c957-w58rb 1/1 Running 0 13s 10.100.1.19 worker232
deploy-xiuxian-nodeaffinity-7df457c957-w8zfl 1/1 Running 0 13s 10.100.1.20 worker232
deploy-xiuxian-nodeaffinity-7df457c957-wfmfl 1/1 Running 0 13s 10.100.1.21 worker232
deploy-xiuxian-nodeaffinity-7df457c957-zxjgr 1/1 Running 0 13s 10.100.1.22 worker232
2.4 总结
此处可以看出,配置硬限制(required)的nodeAffinity后,Pod只会被调度到带有dc=Sichuan标签的worker232节点上,不满足标签条件的节点不参与调度
3 使用nodeaffinity实现超越nodeselector功能,支持多值匹配
3.1编写资源清单并查看节点标签
root@master231:~/manifests/scheduler# cat 09-scheduler-nodeAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-nodeaffinity
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: cq
xm: lisi
spec:
tolerations:
– operator: Exists
# 定义亲和性
affinity:
# 定义节点亲和性
nodeAffinity:
# 定义硬限制
requiredDuringSchedulingIgnoredDuringExecution:
# 定义节点选择器
nodeSelectorTerms:
# 基于表达式匹配节点
– matchExpressions:
# 定义节点标签的key
– key: dc
# 定义节点标签的value
values:
– Sichuan
– shanghai
# 定义key和values之间的关系,有效值为: In,NotIn, Exists, DoesNotExist. Gt, and Lt
operator: In
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:~/manifests/scheduler# kubectl get nodes -l dc=Sichuan
NAME STATUS ROLES AGE VERSION
worker232 Ready
root@master231:~/manifests/scheduler# kubectl get nodes -l dc=shanghai
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 5d15h v1.23.17
3.2 部署应用
root@master231:~/manifests/scheduler# kubectl apply -f 09-scheduler-nodeAffinity.yaml
deployment.apps/deploy-xiuxian-nodeaffinity created
root@master231:~/manifests/scheduler# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-nodeaffinity-5849dbdb8-5gfpk 1/1 Running 0 13s 10.100.0.24 master231
deploy-xiuxian-nodeaffinity-5849dbdb8-b9xps 1/1 Running 0 13s 10.100.1.24 worker232
deploy-xiuxian-nodeaffinity-5849dbdb8-hpbtx 1/1 Running 0 13s 10.100.1.25 worker232
deploy-xiuxian-nodeaffinity-5849dbdb8-kj9w6 1/1 Running 0 13s 10.100.0.25 master231
deploy-xiuxian-nodeaffinity-5849dbdb8-sj7rf 1/1 Running 0 13s 10.100.1.23 worker232
3.3 查看污点
root@master231:~/manifests/scheduler# kubectl describe nodes | grep -i taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints:
Taints:
3.4 总结
nodeAffinity的values可以写多个值,实现多标签值匹配;本例中Pod被分散调度到dc=Sichuan和dc=shanghai的节点上(master231能参与调度是因为tolerations容忍了它的污点)
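除了requiredDuringSchedulingIgnoredDuringExecution这种硬限制,nodeAffinity还支持软限制,下面是一个preferred形式的示例片段(weight和标签取值仅作演示假设):
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          # weight取值1-100,调度器优先选择得分高的节点,但没有满足条件的节点时仍可调度到其他节点
          - weight: 80
            preference:
              matchExpressions:
              - key: dc
                operator: In
                values:
                - Sichuan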
13 PodAffinity POD亲和力
13.1 概述
podAffinity在调度pod时可以基于拓扑域进行调度,当某个pod调度到特定的拓扑域后,后续的所有pod都往该拓扑域调度
13.2 应用部署
13.2.1 查看标签
root@master231:~/manifests/scheduler# kubectl get nodes --show-labels -l "dc in(Sichuan,shanghai)"
NAME STATUS ROLES AGE VERSION LABELS
master231 Ready control-plane,master 5d15h v1.23.17 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dc=shanghai,kubernetes.io/arch=amd64,kubernetes.io/hostname=master231,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
worker232 Ready
13.2.2 编写资源清单
root@master231:~/manifests/scheduler# cat 10-scheduler-podAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-podaffinity
labels:
xingming: zhangsan
spec:
replicas: 10
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: cq
xm: lisi
spec:
tolerations:
– operator: Exists
# 定义亲和性
affinity:
# 定义Pod亲和性
podAffinity:
# 定义硬限制
requiredDuringSchedulingIgnoredDuringExecution:
# 定义拓扑域(“机房”)
– topologyKey: dc
# 定义Pod标签选择器
labelSelector:
# 基于表达式匹配节点
matchExpressions:
# 定义Pod标签的key
- key: apps
# 定义Pod标签的value
values:
– xiuxian
# 定义key和values之间的关系,有效值为: In,NotIn, Exists, DoesNotExist. Gt, and Lt
operator: In
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:~/manifests/scheduler#
root@master231:~/manifests/scheduler# kubectl apply -f 10-scheduler-podAffinity.yaml
root@master231:~/manifests/scheduler# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-podaffinity-74c6558c8f-2xz84 1/1 Running 0 31s 10.100.2.70 worker233
deploy-xiuxian-podaffinity-74c6558c8f-4ggrt 1/1 Running 0 31s 10.100.2.71 worker233
deploy-xiuxian-podaffinity-74c6558c8f-5pf4c 1/1 Running 0 31s 10.100.2.72 worker233
deploy-xiuxian-podaffinity-74c6558c8f-6b64b 1/1 Running 0 31s 10.100.2.64 worker233
deploy-xiuxian-podaffinity-74c6558c8f-d7skb 1/1 Running 0 31s 10.100.2.69 worker233
deploy-xiuxian-podaffinity-74c6558c8f-drbmg 1/1 Running 0 31s 10.100.2.66 worker233
deploy-xiuxian-podaffinity-74c6558c8f-jvxgh 1/1 Running 0 31s 10.100.2.68 worker233
deploy-xiuxian-podaffinity-74c6558c8f-kd6f2 1/1 Running 0 31s 10.100.2.63 worker233
deploy-xiuxian-podaffinity-74c6558c8f-nt6pr 1/1 Running 0 31s 10.100.2.65 worker233
deploy-xiuxian-podaffinity-74c6558c8f-s6ds8 1/1 Running 0 31s 10.100.2.67 worker233
14 podantiaffinity pod反亲和性
14.1 概述
在调度Pod时可以基于拓扑域进行调度,当某个Pod调度到特定的拓扑域后,后续的Pod都不能往该拓扑域调度。
14.2 应用部署
root@master231:~/manifests/scheduler# cat 11-scheduler-podAntiAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-podantiaffinity
labels:
xingming: zhangsan
spec:
replicas: 5
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
city: cq
xm: lisi
spec:
tolerations:
– operator: Exists
# 定义亲和性
affinity:
# 定义Pod反亲和性
podAntiAffinity:
# 定义硬限制
requiredDuringSchedulingIgnoredDuringExecution:
# 定义拓扑域(“机房”)
– topologyKey: dc
# 定义Pod标签选择器
labelSelector:
# 基于表达式匹配节点
matchExpressions:
# 定义Pod标签的key
- key: apps
# 定义Pod标签的value
values:
– xiuxian
# 定义key和values之间的关系,有效值为: In,NotIn, Exists, DoesNotExist. Gt, and Lt
operator: In
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:~/manifests/scheduler# kubectl apply -f 11-scheduler-podAntiAffinity.yaml
deployment.apps/deploy-xiuxian-podantiaffinity created
root@master231:~/manifests/scheduler#
14.3 测试验证
root@master231:~/manifests/scheduler# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-podantiaffinity-df6d7bfbf-c97z7 0/1 Pending 0 38s
deploy-xiuxian-podantiaffinity-df6d7bfbf-cxctb 0/1 Pending 0 38s
deploy-xiuxian-podantiaffinity-df6d7bfbf-hb95h 1/1 Running 0 38s 10.100.1.26 worker232
deploy-xiuxian-podantiaffinity-df6d7bfbf-hg5pz 1/1 Running 0 38s 10.100.0.26 master231
deploy-xiuxian-podantiaffinity-df6d7bfbf-sfkqh 1/1 Running 0 38s 10.100.2.83 worker233
15 扩展
– limits
https://www.cnblogs.com/ysl/p/17968870
– quota
https://www.cnblogs.com/ysl/p/17955506
27 集群的扩容和缩容
1 缩容
k8s集群节点下线流程
1 驱逐已经调度到节点的pod
2 worker节点停止kubelet组件并禁止开机启动
3 worker节点重置环境或备份
4 重新安装操作系统
5 master节点删除worker节点
6 缩容测试
6.1 驱逐
root@master231:~/manifests/scheduler# kubectl drain worker232 --ignore-daemonsets
node/worker232 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-bx879, kube-system/kube-proxy-tqc9v
evicting pod kube-system/coredns-6d8c4cb4d-v7bjz
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-87f79
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-257sk
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-2lmn2
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-468z4
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-khnvf
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-dspkp
evicting pod default/deploy-xiuxian-taints-55d8ff8b5c-lmckj
evicting pod kube-system/coredns-6d8c4cb4d-gf2ps
pod/deploy-xiuxian-taints-55d8ff8b5c-87f79 evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-2lmn2 evicted
I0408 22:38:06.327155 32824 request.go:685] Waited for 1.193868197s due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.231:6443/api/v1/namespaces/default/pods/deploy-xiuxian-taints-55d8ff8b5c-257sk
pod/deploy-xiuxian-taints-55d8ff8b5c-257sk evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-lmckj evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-dspkp evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-khnvf evicted
pod/deploy-xiuxian-taints-55d8ff8b5c-468z4 evicted
pod/coredns-6d8c4cb4d-v7bjz evicted
pod/coredns-6d8c4cb4d-gf2ps evicted
node/worker232 drained
6.2 停止kubelet并禁止开机启动
root@worker232:~# systemctl disable --now kubelet
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
root@worker232:~#
6.3 节点重置
root@worker232:~# kubeadm reset -f
6.4 worker节点重新安装操作系统
此处为测试环境,不重新安装操作系统,等下扩容还需要使用
6.5 master节点删除worker节点
root@master231:~/manifests/scheduler# kubectl delete nodes worker232
node “worker232” deleted
root@master231:~/manifests/scheduler# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 4d3h v1.23.17
worker233 Ready
2 节点扩容
kubeadm实现token管理
1 创建并查看token
root@master231:~/manifests/scheduler# kubeadm token create
xdid4z.e23pjdhn1w90qzl7
root@master231:~/manifests/scheduler# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
xdid4z.e23pjdhn1w90qzl7 23h 2025-04-09T14:45:42Z authentication,signing
2 删除token
root@master231:~/manifests/scheduler# kubeadm token delete xdid4z
bootstrap token “xdid4z” deleted
root@master231:~/manifests/scheduler# kubeadm token list
root@master231:~/manifests/scheduler#
3 创建自定义token
root@master231:~/manifests/scheduler# kubeadm token create --ttl 0 --print-join-command
kubeadm join 10.0.0.231:6443 --token 74lbdb.8wmb24q5xtjarbps --discovery-token-ca-cert-hash sha256:829b9f6b642b548ec4e311028eb6d56c04f3f52240804aa631337ab43165af4d
4 新建worker232
– 1.K8S环境准备,禁用swap,开启内核参数,核心数量,安装软件包(docker,kubelet,kubeadm,kubectl)等;
– 2.将kubelet设置为开机自启动;
– 3.worker节点加入集群 —》kubeadm join
– 4.master查看node是否加入集群;
在worker232节点使用
kubeadm join 10.0.0.231:6443 --token 74lbdb.8wmb24q5xtjarbps --discovery-token-ca-cert-hash sha256:829b9f6b642b548ec4e311028eb6d56c04f3f52240804aa631337ab43165af4d
5 master节点查看集群
root@master231:~/manifests/scheduler# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 4d3h v1.23.17
worker232 Ready
worker233 Ready
28 pod创建流程
1.执行kubectl命令时会加载”~/.kube/config”,从而识别到apiserver的地址,端口及认证证书;
2.apiserver进行证书认证,鉴权,语法检查,若成功则可以进行数据的读取或者写入;
3.若用户是写入操作(创建,修改,删除)则需要修改etcd数据库的信息;
4.如果创建Pod,此时scheduler负责Pod调度,将Pod调度到合适的worker节点,并将结果返回给ApiServer,由apiServer负责存储到etcd中;
5.kubelet组件会周期性地向apiServer上报Pod内的容器资源(cpu,memory,disk,gpu,…)及worker宿主机节点状态,apiServer将结果存储到etcd中;同时kubelet也会获取调度到本节点的Pod任务并执行;
6.kubelet开始调用CRI接口创建容器(依次创建pause,initContainers,containers);
7.在运行过程中,若Pod容器,正常或者异常退出时,kubelet会根据重启策略是否重启容器(Never,Always,OnFailure);
8.若一个节点挂掉,则需要controller manager介入维护,比如Pod副本数量缺失,则需要创建watch事件,要求控制器的副本数要达到标准,从而要创建新的Pod,此过程重复步骤4-6。
29 pod的端口转发
1 环境准备
root@master231:~/manifests/scheduler# kubectl apply -f 11-scheduler-podAntiAffinity.yaml
root@master231:~/manifests/scheduler# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-xiuxian-podantiaffinity-df6d7bfbf-c97z7 0/1 Pending 0 150m
deploy-xiuxian-podantiaffinity-df6d7bfbf-cxctb 0/1 Pending 0 150m
deploy-xiuxian-podantiaffinity-df6d7bfbf-hb95h 1/1 Running 0 150m
deploy-xiuxian-podantiaffinity-df6d7bfbf-hg5pz 1/1 Running 0 150m
deploy-xiuxian-podantiaffinity-df6d7bfbf-sfkqh 1/1 Running 0 150m
2 配置端口转发
root@master231:~/manifests/scheduler# kubectl port-forward deploy-xiuxian-podantiaffinity-df6d7bfbf-hb95h --address=0.0.0.0 81:80
3 测试
http://192.168.137.231:81
4 总结
用于临时测试使用,响应式暴露服务
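port-forward除了转发单个Pod,也可以直接指定Deployment或Service(会自动选择其中一个Pod),下面是一个示例(本地端口8081仅作演示假设):
root@master231:~/manifests/scheduler# kubectl port-forward --address=0.0.0.0 deployment/deploy-xiuxian-podantiaffinity 8081:80
# 也支持 svc/<service名称> 的形式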
5 现已接触到的端口转发
hostPort 端口暴露
port-forward 响应式暴露端口
hostNetwork 使用宿主机网络
30 发布策略
1 灰度发布|金丝雀发布
特点:
新版本逐渐替代旧版本,在运行过程中新版本和旧版本共存
但最终会保留一个版本
2 蓝绿部署
特点
同步部署两套集群环境,但仅有一套环境对外提供服务
3 A|B测试
特点
针对一部分用户进行升级,或者说将一部分用户进行新版本体验
1 灰度发布
1.1 应用部署
root@master231:~/manifests# cd /manifests/
root@master231:/manifests# mkdir huidu
root@master231:/manifests# cd huidu/
root@master231:/manifests/huidu#
root@master231:/manifests/huidu# cat 01-deploy-xiuxian-old.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-old
spec:
replicas: 3
selector:
matchLabels:
apps: xiuxian
version: v1
template:
metadata:
labels:
apps: xiuxian
version: v1
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
root@master231:/manifests/huidu# kubectl apply -f 01-deploy-xiuxian-old.yaml
root@master231:/manifests/huidu# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-old-568cf47956-j7h2k 1/1 Running 0 5m26s 10.100.2.84 worker233
deploy-xiuxian-old-568cf47956-jmhrp 1/1 Running 0 5m26s 10.100.1.27 worker232
deploy-xiuxian-old-568cf47956-wxprj 1/1 Running 0 5m26s 10.100.2.85 worker233
root@master231:/manifests/huidu#
1.2 定义svc
1.2.1 编写资源清单
root@master231:/manifests/huidu# kubectl explain service
root@master231:/manifests/huidu# kubectl apply -f 02-svc-xiuxian.yaml
service/svc-xiuxian created
root@master231:/manifests/huidu# cat 02-svc-xiuxian.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian
spec:
#定义标签选择器
selector:
apps: xiuxian
#关联svc到pods的端口映射
ports:
#svc自身监听的端口
– port: 81
#反向代理到后端的pod端口
targetPort: 80
1.2.2 查看svc资源
root@master231:/manifests/huidu# kubectl get svc svc-xiuxian
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-xiuxian ClusterIP 10.200.224.60
root@master231:/manifests/huidu# kubectl describe svc svc-xiuxian | egrep -i ‘ip|endpoint’
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.224.60
IPs: 10.200.224.60
Endpoints: 10.100.1.27:80,10.100.2.84:80,10.100.2.85:80
1.2.3 测试svc有负载均衡的能力
root@master231:/manifests/huidu# kubectl exec -it deploy-xiuxian-old-568cf47956-j7h2k — sh
/ # echo 1111111111111 > /usr/share/nginx/html/index.html
/ # exit
root@master231:/manifests/huidu# kubectl exec -it deploy-xiuxian-old-568cf47956-jmhrp — sh
/ # echo 22222222222 > /usr/share/nginx/html/index.html
/ # exit
root@master231:/manifests/huidu# kubectl exec -it deploy-xiuxian-old-568cf47956-wxprj — sh
/ # echo 3333333333 > /usr/share/nginx/html/index.html
/ # exit
root@master231:/manifests/huidu# while true ; do curl 10.200.224.60:81;sleep 0.5;done
22222222222
1111111111111
22222222222
22222222222
22222222222
22222222222
1111111111111
22222222222
22222222222
3333333333
1.2.4 验证svc的服务发现功能
root@master231:/manifests/huidu# kubectl delete pods –all
pod “deploy-xiuxian-old-568cf47956-j7h2k” deleted
pod “deploy-xiuxian-old-568cf47956-jmhrp” deleted
pod “deploy-xiuxian-old-568cf47956-wxprj” deleted
root@master231:/manifests/huidu# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-xiuxian-old-568cf47956-bj7qt 1/1 Running 0 34s
deploy-xiuxian-old-568cf47956-pf4kr 1/1 Running 0 34s
deploy-xiuxian-old-568cf47956-q4m6d 1/1 Running 0 34s
root@master231:/manifests/huidu# curl 10.200.224.60:81
凡人修仙传 v1

root@master231:/manifests/huidu#
1.2.5 总结
1 svc有服务发现的作用,删除pod重新创建后能够自动关联新的pod
2 svc拥有负载均衡的能力
3 为客户端提供统一的访问入口
1.3 灰度发布流程
1 先部署旧版本(3副本)
2 部署svc关联旧版本
3 部署新版本(3副本)
4 将旧版本的副本数从3逐步降到0,将新版本的副本数从1逐步升到3
5 删除旧版本控制器
1.3.1 部署新版本
while true ; do curl 10.200.224.60:81;sleep 0.5;done
…
此时发现新版本和旧版本共存的现象。
1.3.2 修改01和03资源清单的副本数量
部署应用
root@master231:/manifests/huidu# cat 01-deploy-xiuxian-old.yaml 03-deploy-xiuxian-new.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-old
spec:
replicas: 2
selector:
matchLabels:
apps: xiuxian
version: v1
template:
metadata:
labels:
apps: xiuxian
version: v1
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-new
spec:
replicas: 2
selector:
matchLabels:
apps: xiuxian
version: v2
template:
metadata:
labels:
apps: xiuxian
version: v2
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2
root@master231:/manifests/huidu# kubectl get pods -o wide –show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
deploy-xiuxian-new-845ffc675b-8fxgc 1/1 Running 0 12m 10.100.2.88 worker233
deploy-xiuxian-new-845ffc675b-f5l79 1/1 Running 0 34s 10.100.1.29 worker232
deploy-xiuxian-old-568cf47956-pf4kr 1/1 Running 0 27m 10.100.2.86 worker233
deploy-xiuxian-old-568cf47956-q4m6d 1/1 Running 0 27m 10.100.1.28 worker232
依此类推,直到01的副本数降为0、03的副本数升为3,最后将01资源删除即可
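调整副本数除了修改资源清单后重新kubectl apply,也可以用kubectl scale直接完成,下面是按灰度节奏操作的一个示例:
root@master231:/manifests/huidu# kubectl scale deployment deploy-xiuxian-old --replicas=1
root@master231:/manifests/huidu# kubectl scale deployment deploy-xiuxian-new --replicas=3
root@master231:/manifests/huidu# kubectl delete deployment deploy-xiuxian-old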
2 蓝绿部署
2.1 流程
1 部署旧版本
2 通过svc关联旧版本
3 部署新版本
4 将svc指向新版本
1 部署旧版本
root@master231:/manifests/blue-green# cat 01-deploy-blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-blue
spec:
replicas: 5
selector:
matchLabels:
apps: v1
template:
metadata:
labels:
apps: v1
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
2 通过svc关联旧版本
root@master231:/manifests/blue-green# cat 02-svc-xiuxian.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian
spec:
selector:
apps: v1
ports:
– port: 81
targetPort: 80
root@master231:/manifests/blue-green# kubectl get -f 02-svc-xiuxian.yaml -f 02-svc-xiuxian.yaml
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-xiuxian ClusterIP 10.200.224.60
svc-xiuxian ClusterIP 10.200.224.60
root@master231:/manifests/blue-green# kubectl get -f 01-deploy-blue.yaml -f 02-svc-xiuxian.yaml
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/deploy-xiuxian-blue 5/5 5 5 100s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/svc-xiuxian ClusterIP 10.200.224.60
root@master231:/manifests/blue-green# kubectl get svc,pods -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.200.0.1
service/svc-xiuxian ClusterIP 10.200.224.60
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/deploy-xiuxian-blue-5d88dd4cfb-4f7b7 1/1 Running 0 2m21s 10.100.2.91 worker233
pod/deploy-xiuxian-blue-5d88dd4cfb-knfhv 1/1 Running 0 2m21s 10.100.1.31 worker232
pod/deploy-xiuxian-blue-5d88dd4cfb-n6vwd 1/1 Running 0 2m21s 10.100.2.89 worker233
pod/deploy-xiuxian-blue-5d88dd4cfb-ppd96 1/1 Running 0 2m21s 10.100.1.30 worker232
pod/deploy-xiuxian-blue-5d88dd4cfb-t6qft 1/1 Running 0 2m21s 10.100.2.90 worker233
3 访问测试
root@master231:/manifests/blue-green# while true ; do curl 10.200.224.60:81;sleep 0.5;done
凡人修仙传 v1

4 部署新版本
root@master231:/manifests/blue-green# kubectl apply -f 03-deploy-xiuxian-green.yaml
deployment.apps/deploy-xiuxian-green created
root@master231:/manifests/blue-green# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-blue-5d88dd4cfb-4f7b7 1/1 Running 0 13m 10.100.2.91 worker233
deploy-xiuxian-blue-5d88dd4cfb-knfhv 1/1 Running 0 13m 10.100.1.31 worker232
deploy-xiuxian-blue-5d88dd4cfb-n6vwd 1/1 Running 0 13m 10.100.2.89 worker233
deploy-xiuxian-blue-5d88dd4cfb-ppd96 1/1 Running 0 13m 10.100.1.30 worker232
deploy-xiuxian-blue-5d88dd4cfb-t6qft 1/1 Running 0 13m 10.100.2.90 worker233
deploy-xiuxian-green-54c7bddb9b-6bb6j 1/1 Running 0 13s 10.100.2.92 worker233
deploy-xiuxian-green-54c7bddb9b-cksbs 1/1 Running 0 13s 10.100.1.32 worker232
deploy-xiuxian-green-54c7bddb9b-jh92r 1/1 Running 0 13s 10.100.2.93 worker233
deploy-xiuxian-green-54c7bddb9b-lzlpv 1/1 Running 0 13s 10.100.1.33 worker232
deploy-xiuxian-green-54c7bddb9b-zgs2z 1/1 Running 0 13s 10.100.2.94 worker233
root@master231:/manifests/blue-green# cat 03-deploy-xiuxian-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-green
spec:
replicas: 5
selector:
matchLabels:
apps: v2
template:
metadata:
labels:
apps: v2
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2
root@master231:/manifests/blue-green#
5 将svc指向新版本
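将svc指向新版本,只需把02-svc-xiuxian.yaml中的选择器从apps: v1改为apps: v2后重新apply,也可以用kubectl patch直接修改,示例如下:
root@master231:/manifests/blue-green# kubectl patch svc svc-xiuxian -p '{"spec":{"selector":{"apps":"v2"}}}'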
root@master231:/manifests/blue-green# kubectl get svc,pods -o wide –show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS
service/kubernetes ClusterIP 10.200.0.1
service/svc-xiuxian ClusterIP 10.200.224.60
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pod/deploy-xiuxian-blue-5d88dd4cfb-4f7b7 1/1 Running 0 15m 10.100.2.91 worker233
pod/deploy-xiuxian-blue-5d88dd4cfb-knfhv 1/1 Running 0 15m 10.100.1.31 worker232
pod/deploy-xiuxian-blue-5d88dd4cfb-n6vwd 1/1 Running 0 15m 10.100.2.89 worker233
pod/deploy-xiuxian-blue-5d88dd4cfb-ppd96 1/1 Running 0 15m 10.100.1.30 worker232
pod/deploy-xiuxian-blue-5d88dd4cfb-t6qft 1/1 Running 0 15m 10.100.2.90 worker233
pod/deploy-xiuxian-green-54c7bddb9b-6bb6j 1/1 Running 0 2m1s 10.100.2.92 worker233
pod/deploy-xiuxian-green-54c7bddb9b-cksbs 1/1 Running 0 2m1s 10.100.1.32 worker232
pod/deploy-xiuxian-green-54c7bddb9b-jh92r 1/1 Running 0 2m1s 10.100.2.93 worker233
pod/deploy-xiuxian-green-54c7bddb9b-lzlpv 1/1 Running 0 2m1s 10.100.1.33 worker232
pod/deploy-xiuxian-green-54c7bddb9b-zgs2z 1/1 Running 0 2m1s 10.100.2.94 worker233
此时svc的选择器已指向apps=v2,apps=v1的Pod不再对外提供业务
6 再次查看访问的数据
root@master231:/manifests/blue-green# curl 10.200.224.60:81
此时都是V2版本
3 istio实现A|B测试
3.1 概述
Istio
Istio是Google、IBM和Lyft联合开源的微服务Service Mesh框架,旨在解决大量微服务的发现、连接、管理、监控以及安全等问题。
Istio的主要特性包括:
– HTTP、gRPC和TCP网络流量的自动负载均衡
– 丰富的路由规则,细粒度的网络流量行为控制
– 流量加密、服务间认证,以及强身份声明
– 全范围(Fleet-wide)策略执行
– 深度遥测和报告
Istio在2017年5月发布了第一个0.1版本,目的是为了抢占市场。
但是在2018年7月才发布了1.0版本,而在2020年的3月发布了1.5版本。2022年的2月份发布了1.13版本。2023年2月份发布了1.17版本。2023年9月份发布了1.20版本。
3.2 istio各版本支持的k8s版本
参考地址:
https://istio.io/latest/docs/releases/supported-releases/#support-status-of-istio-releases
3.3 下载istio软件包
wget https://github.com/istio/istio/releases/download/1.17.8/istio-1.17.8-linux-amd64.tar.gz
3.4 解压软件包
[root@master231 istio]# tar xf istio-1.17.8-linux-amd64.tar.gz
3.5 配置istioctl工具环境变量
[root@master231 istio-1.17.8]# cat /etc/profile.d/istio.sh
export PATH=$PATH:/manifests/add-ones/istio/istio-1.17.8/bin
[root@master231 istio-1.17.8]# source /etc/profile.d/istio.sh
[root@master231 istio-1.17.8]# istioctl --help
Istio configuration command line utility for service operators to
debug and diagnose their Istio mesh.
Usage:
istioctl [command]
…..
3.6 安装istio
在安装 Istio 时所能够使用的内置配置文件。这些配置文件提供了对Istio控制平面和Istio数据平面Sidecar的定制内容。
您可以从Istio内置配置文件的其中一个开始入手,然后根据您的特定需求进一步自定义配置文件。当前提供以下几种内置配置文件:
– default:
根据 IstioOperator API 的默认设置启动组件。
建议用于生产部署和 Multicluster Mesh 中的 Primary Cluster。
您可以运行 istioctl profile dump 命令来查看默认设置。
– demo:
这一配置具有适度的资源需求,旨在展示 Istio 的功能。
它适合运行 Bookinfo 应用程序和相关任务。 这是通过快速开始指导安装的配置。
此配置文件启用了高级别的追踪和访问日志,因此不适合进行性能测试。
– minimal:
与默认配置文件相同,但只安装了控制平面组件。
它允许您使用 Separate Profile 配置控制平面和数据平面组件(例如 Gateway)。
– remote:
配置 Multicluster Mesh 的 Remote Cluster。
– empty:
不部署任何东西。可以作为自定义配置的基本配置文件。
– preview:
预览文件包含的功能都是实验性。这是为了探索 Istio 的新功能。不确保稳定性、安全性和性能(使用风险需自负)。
参考链接:
https://istio.io/v1.17/zh/docs/setup/additional-setup/config-profiles/
https://istio.io/v1.17/zh/docs/setup/getting-started/#download
[root@master231 istio]# istioctl install --set profile=demo -y
3.7 查看拉取的镜像是否成功
root@master231:~# kubectl -n istio-system get pods
3.8 添加istioctl客户端自动补全功能
[root@master231 istio]# ll istio-1.17.8/tools/istioctl.bash
-rw-r--r-- 1 root root 11294 Oct 11 2023 istio-1.17.8/tools/istioctl.bash
[root@master231 istio]#
[root@master231 istio]# source istio-1.17.8/tools/istioctl.bash
[root@master231 istio]#
[root@master231 istio]# istioctl
admin (Manage control plane (istiod) configuration)
analyze (Analyze Istio configuration and print validation messages)
authz ((authz is experimental. Use `istioctl experimental authz`))
bug-report (Cluster information and log capture support tool.)
…
3.9 查看istio的版本号
[root@master231 istio]# istioctl version
client version: 1.17.8
control plane version: 1.17.8
data plane version: 1.17.8 (2 proxies)
[root@master231 istio]#
4 istio实现A|B测试
4.1 编写资源清单
[root@master231 /server/kubernetes/istio/case-demo]# cat 01-deploy-apps.yaml
apiVersion: v1
kind: Namespace
metadata:
name: yangsenlin
—
apiVersion: apps/v1
# 注意,创建pod建议使用deploy资源,不要使用rc资源,否则istioctl可能无法手动注入。
kind: Deployment
metadata:
name: apps-v1
namespace: yangsenlin
spec:
replicas: 1
selector:
matchLabels:
app: xiuxian01
version: v1
auther: yangsenlin
template:
metadata:
labels:
app: xiuxian01
version: v1
auther: yangsenlin
spec:
volumes:
– name: data
emptyDir: {}
initContainers:
– name: init
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2
volumeMounts:
– name: data
mountPath: /data
command:
– /bin/sh
– -c
– echo c1 > /data/index.html
containers:
– name: c1
ports:
– containerPort: 80
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
volumeMounts:
– name: data
mountPath: /usr/share/nginx/html
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: apps-v2
namespace: yangsenlin
spec:
replicas: 1
selector:
matchLabels:
app: xiuxian02
version: v2
auther: yangsenlin
template:
metadata:
labels:
app: xiuxian02
version: v2
auther: yangsenlin
spec:
volumes:
– name: data
emptyDir: {}
initContainers:
– name: init
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
volumeMounts:
– name: data
mountPath: /data
command:
– /bin/sh
– -c
– echo c2 > /data/index.html
containers:
– name: c2
ports:
– containerPort: 80
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
volumeMounts:
– name: data
mountPath: /usr/share/nginx/html
[root@master231 case-demo]# cat 02-svc-apps.yaml
apiVersion: v1
kind: Service
metadata:
name: apps-svc-v1
namespace: yangsenlin
spec:
selector:
version: v1
ports:
– protocol: TCP
port: 80
targetPort: 80
name: http
—
apiVersion: v1
kind: Service
metadata:
name: apps-svc-v2
namespace: yangsenlin
spec:
selector:
version: v2
ports:
– protocol: TCP
port: 80
targetPort: 80
name: http
—
apiVersion: v1
kind: Service
metadata:
name: apps-svc-all
namespace: yangsenlin
spec:
selector:
auther: yangsenlin
ports:
– protocol: TCP
port: 80
targetPort: 80
name: http
[root@master231 case-demo]# cat 03-deploy-client.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: apps-client
namespace: yangsenlin
spec:
replicas: 1
selector:
matchLabels:
app: client-test
template:
metadata:
labels:
app: client-test
spec:
containers:
– name: c1
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
command:
– tail
– -f
– /etc/hosts
[root@master231 case-demo]# cat 04-vs-apps-svc-all.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: apps-svc-all-vs
namespace: yangsenlin
spec:
hosts:
– apps-svc-all
http:
# 定义匹配规则
– match:
# 基于header信息匹配将其进行路由,header信息自定义即可。
– headers:
# 匹配请求头中yangsenlin-username值为"forestysl"的用户,这个KEY是咱们自定义的。
yangsenlin-username:
# "exact"表示精确匹配,也可以使用"prefix"进行前缀匹配。
exact: forestysl
route:
– destination:
host: apps-svc-v1
– route:
– destination:
host: apps-svc-v2
4.2 手动注入
istioctl kube-inject -f 03-deploy-client.yaml | kubectl -n yangsenlin apply -f -
istioctl kube-inject -f 01-deploy-apps.yaml | kubectl -n yangsenlin apply -f -
kubectl get all -n yangsenlin
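除了手动注入,也可以给命名空间打上istio-injection=enabled标签实现sidecar自动注入(打标签后新创建的Pod会自动注入,已有Pod需重建),示例:
kubectl label namespace yangsenlin istio-injection=enabled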
4.3 开始测试
[root@master231 /server/kubernetes/istio/case-demo]# kubectl get pods -n yangsenlin
NAME READY STATUS RESTARTS AGE
apps-client-5f579696d5-5n8zh 2/2 Running 0 3m39s
apps-v1-867845f5f9-wrkgv 2/2 Running 0 3m39s
apps-v2-7dfbc7c579-8nvsg 2/2 Running 0 3m39s
[root@master231 /server/kubernetes/istio/case-demo]#
[root@master241 yangsenlin]# kubectl -n yangsenlin exec -it apps-client-5cc67d864-g2r2v — sh
/ #
/ # while true; do curl -H "yangsenlin-username:forestysl" http://apps-svc-all;sleep 0.1;done # 添加用户认证的header信息
c1
c1
c1
c1
c1
c1
c1
…
/ # while true; do curl http://apps-svc-all;sleep 0.1;done # 不添加用户认证
c2
c2
c2
c2
c2
c2
c2
…
31 存储卷
常见的存储卷有
emptyDir
hostPath
nfs
configMap
secrets
downwardAPI
projected
pv
pvc
sc
local sc
1 emptyDir
1.1 emptyDir概述
emptyDir表示“空目录”(临时目录)存储卷,可以对容器的指定路径做数据持久化。
1.2 特点
随着pod生命周期的结束而结束
应用场景
1 实现同一个pod内不同容器的数据共享;
2 对数据做临时存储
1.3 对容器的指定目录做数据持久化
root@master231:/manifests# mkdir volumes
root@master231:/manifests# cd volumes/
root@master231:/manifests/volumes#
root@master231:/manifests/volumes# cat 01-deploy-emptyDir.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-emptydir
spec:
replicas: 3
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
spec:
# 定义存储卷
volumes:
# 定义存储类型是一个”空目录”(emptyDir)
– emptyDir: {}
# 为存储卷起名字
name: data
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
# 定义存储卷挂载
volumeMounts:
# 需要挂载的存储卷名称
- name: data
# 挂载到容器的指定路径,如果该路径下原本有文件,挂载后会被卷内容覆盖而不可见
mountPath: /usr/share/nginx/html
1.4 查看emptyDir数据的存储路径
/var/lib/kubelet/pods/
1.5 master节点写入测试数据,worker节点上查看
root@worker233:~# ll /var/lib/kubelet/pods/5a653aea-3acb-4525-8707-78dfc1024cd0/volumes/kubernetes.io~empty-dir/data/
total 8
drwxrwxrwx 2 root root 4096 Apr 10 20:59 ./
drwxr-xr-x 3 root root 4096 Apr 10 20:59 ../
root@master231:/manifests/volumes# kubectl exec -it deploy-xiuxian-emptydir-745ffd54fc-d99ng — sh
/ # echo 111111111111111111 > /usr/share/nginx/html/index.html
/ # exit
root@master231:/manifests/volumes#
root@worker233:~# docker ps -a | grep deploy-xiuxian-emptydir-745ffd54fc-d99ng
fa31078b4b9f f28fd43be4ad “/docker-entrypoint.…” 46 seconds ago Up 45 seconds k8s_c1_deploy-xiuxian-emptydir-745ffd54fc-d99ng_default_5a653aea-3acb-4525-8707-78dfc1024cd0_0
bbd6aaefef3c registry.aliyuncs.com/google_containers/pause:3.6 “/pause” 47 seconds ago Up 47 seconds k8s_POD_deploy-xiuxian-emptydir-745ffd54fc-d99ng_default_5a653aea-3acb-4525-8707-78dfc1024cd0_0
root@worker233:~# ll /var/lib/kubelet/pods/5a653aea-3acb-4525-8707-78dfc1024cd0/volumes/kubernetes.io~empty-dir/data/
total 12
drwxrwxrwx 2 root root 4096 Apr 10 21:05 ./
drwxr-xr-x 3 root root 4096 Apr 10 20:59 ../
-rw-r–r– 1 root root 19 Apr 10 21:05 index.html
root@worker233:~# docker ps -a | grep deploy-xiuxian-emptydir-745ffd54fc-2644m
1d4664f3c397 f28fd43be4ad “/docker-entrypoint.…” 9 minutes ago Up 9 minutes k8s_c1_deploy-xiuxian-emptydir-745ffd54fc-2644m_default_071e2d24-955e-4218-bb03-1635eeee8388_0
cb5aa2df92e8 registry.aliyuncs.com/google_containers/pause:3.6 “/pause” 9 minutes ago Up 9 minutes k8s_POD_deploy-xiuxian-emptydir-745ffd54fc-2644m_default_071e2d24-955e-4218-bb03-1635eeee8388_0
root@worker233:~# ll /var/lib/kubelet/pods/071e2d24-955e-4218-bb03-1635eeee8388/volumes/kubernetes.io~empty-dir/data/
total 8
drwxrwxrwx 2 root root 4096 Apr 10 20:59 ./
drwxr-xr-x 3 root root 4096 Apr 10 20:59 ../
root@worker232:~# docker ps -a | grep deploy-xiuxian-emptydir-745ffd54fc-22xpb
1416ba87b6c1 f28fd43be4ad “/docker-entrypoint.…” 11 minutes ago Up 11 minutes k8s_c1_deploy-xiuxian-emptydir-745ffd54fc-22xpb_default_f0d37fdb-4364-4f43-9668-cfe1309dc359_0
b3e3d65ffeb9 registry.aliyuncs.com/google_containers/pause:3.6 “/pause” 11 minutes ago Up 11 minutes k8s_POD_deploy-xiuxian-emptydir-745ffd54fc-22xpb_default_f0d37fdb-4364-4f43-9668-cfe1309dc359_0
root@worker232:~# ll /var/lib/kubelet/pods/f0d37fdb-4364-4f43-9668-cfe1309dc359/volumes/kubernetes.io~empty-dir/data/
total 8
drwxrwxrwx 2 root root 4096 Apr 10 20:59 ./
drwxr-xr-x 3 root root 4096 Apr 10 20:59 ../
root@worker232:~#
1.6 持久化总结
emptyDir的数据属于各自的Pod:只有写入数据的那个Pod对应的emptyDir目录会发生变化,同一Deployment的其他Pod副本各自拥有独立的emptyDir,内容不会同步。
1.7 实现同一个pod内不同容器的数据共享
root@master231:/manifests/volumes# cat 02-deploy-emptyDir.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-emptydir-multiple
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - emptyDir: {}
        name: data
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      - name: c2
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        command:
        - /bin/sh
        - -c
        - echo www.yangsenlin.top >> /ysl/index.html; tail -f /etc/hosts
        volumeMounts:
        - name: data
          mountPath: /ysl
root@master231:/manifests/volumes# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-emptydir-multiple-67cb889f5d-9q894 2/2 Running 0 19s 10.100.2.97 worker233
deploy-xiuxian-emptydir-multiple-67cb889f5d-rwc2p 2/2 Running 0 19s 10.100.2.98 worker233
deploy-xiuxian-emptydir-multiple-67cb889f5d-xvqxh 2/2 Running 0 19s 10.100.1.36 worker232
root@master231:/manifests/volumes#
1.8 测试
root@master231:/manifests/volumes# kubectl exec -it deploy-xiuxian-emptydir-multiple-67cb889f5d-xvqxh -c c1 -- sh
/ # ls -l /usr/share/nginx/html/
total 4
-rw-r--r-- 1 root root 19 Apr 10 13:17 index.html
/ # cat /usr/share/nginx/html/index.html
www.yangsenlin.top
/ #
root@master231:/manifests/volumes# kubectl exec -it deploy-xiuxian-emptydir-multiple-67cb889f5d-xvqxh -c c2 -- sh
/ # ls -l /ysl/index.html
-rw-r--r-- 1 root root 19 Apr 10 13:17 /ysl/index.html
/ # cat /ysl/index.html
www.yangsenlin.top
/ #
2 hostPath
2.1 概述
hostPath用于pod内容器访问worker节点宿主机任意路径
2.2 应用场景
1 将某个worker节点的宿主机数据共享给pod的容器指定路径
2 同步时区
2.3 部署应用
root@master231:/manifests/volumes# cat 03-deploy-hostPath.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-hostpath
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - emptyDir: {}
        name: data01
        # 声明存储卷的类型是hostPath
      - hostPath:
          # 将worker节点宿主机路径暴露给容器,如果目录不存在会自动创建。
          path: /ysl
        name: data02
      - name: data03
        hostPath:
          path: /etc/localtime
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data01
          mountPath: /usr/share/nginx/html
        - name: data02
          mountPath: /ysl-container
        - name: data03
          mountPath: /etc/localtime
root@master231:/manifests/volumes# kubectl apply -f 03-deploy-hostPath.yaml
2.4 worker233节点准备测试数据
root@worker233:~# echo ceshi > /ysl/index.html
2.5 进入属于worker233节点的pod查看和worker232节点的pod作对比
root@master231:/manifests/volumes# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-hostpath-74db47bd86-kdqld 1/1 Running 0 11m 10.100.2.101 worker233
deploy-xiuxian-hostpath-74db47bd86-nh7hb 1/1 Running 0 11m 10.100.1.38 worker232
deploy-xiuxian-hostpath-74db47bd86-njvnm 1/1 Running 0 11m 10.100.2.102 worker233
root@master231:/manifests/volumes# kubectl exec -it deploy-xiuxian-hostpath-74db47bd86-kdqld -- sh
/ # ls -l /ysl-container/index.html
-rw-r--r-- 1 root root 6 Apr 10 21:47 /ysl-container/index.html
/ # cat /ysl-container/index.html
ceshi
/ # exit
root@master231:/manifests/volumes# kubectl exec -it deploy-xiuxian-hostpath-74db47bd86-nh7hb -- sh
/ # ls -l /ysl-container/
total 0
/ #
2.6 总结
hostPath无法实现跨节点的Pod数据共享,使用nfs等网络存储可以解决这个问题
3 nfs
3.1 概述
nfs表示网络文件系统,存在客户端和服务端,需要单独部署服务端
k8s在使用nfs时,集群应该安装nfs的相关模块(每个节点均需安装)
nfs的应用场景
1 实现跨节点不同pod的数据共享
2 实现跨节点存储数据
3.2 部署nfs服务器
所有节点安装nfs程序
apt -y install nfs-kernel-server
3.3 服务端配置nfs
root@master231:/manifests/volumes# mkdir -pv /software/data/nfs-server
root@master231:/manifests/volumes# tail -1 /etc/exports
/software/data/nfs-server *(rw,no_root_squash)
root@master231:/manifests/volumes#
root@master231:/manifests/volumes# systemctl restart nfs-server.service
root@master231:/manifests/volumes# exportfs
/software/data/nfs-server
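nfs服务端配置完成后,可以先在任意worker节点验证共享目录是否可见(示意命令,前提是该节点已按上一步安装好nfs相关软件包),能列出/software/data/nfs-server即说明nfs共享正常:
root@worker232:~# showmount -e 10.0.0.231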
3.4 应用部署
root@master231:/manifests/volumes# cat 04-deploy-nfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-nfs
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - emptyDir: {}
        name: data01
      - name: data02
        hostPath:
          path: /etc/localtime
      - name: data03
        # 表示存储卷的类型是nfs
        nfs:
          # 指定nfs的服务器
          server: 10.0.0.231
          # 指定nfs共享的数据路径
          path: /software/data/nfs-server
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data01
          mountPath: /usr/share/nginx/html
        - name: data02
          mountPath: /etc/localtime
        - name: data03
          mountPath: /nfsdata
root@master231:/manifests/volumes# kubectl apply -f 04-deploy-nfs.yaml
deployment.apps/deploy-xiuxian-nfs created
root@master231:/manifests/volumes# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-nfs-66c674bccb-7dh2w 1/1 Running 0 29s 10.100.2.104 worker233
deploy-xiuxian-nfs-66c674bccb-7sjst 1/1 Running 0 29s 10.100.1.39 worker232
deploy-xiuxian-nfs-66c674bccb-g6d72 1/1 Running 0 29s 10.100.2.103 worker233
root@master231:/manifests/volumes# kubectl exec -it deploy-xiuxian-nfs-66c674bccb-7dh2w -- sh
/ # echo www.yangsenlin.top > /nfsdata/index.html
/ # exit
root@master231:/manifests/volumes# kubectl exec -it deploy-xiuxian-nfs-66c674bccb-7sjst -- sh
/ # cat /nfsdata/index.html
www.yangsenlin.top
/ # exit
root@master231:/manifests/volumes# kubectl exec -it deploy-xiuxian-nfs-66c674bccb-g6d72 -- sh
/ # cat /nfsdata/index.html
www.yangsenlin.top
/ # exit
root@master231:/manifests/volumes# cat /software/data/nfs-server/index.html
www.yangsenlin.top
root@master231:/manifests/volumes#
4 存储卷综合案例测试
root@master231:/manifests/volumes# cat 05-deploy-multiple.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-multiple
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      initContainers:
      - name: init01
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        command:
        - cp
        args:
        - /haha/index.html
        - /xixi/
        volumeMounts:
        - name: data01
          mountPath: /xixi
        - name: data03
          mountPath: /haha
      volumes:
      - emptyDir: {}
        name: data01
      - name: data02
        hostPath:
          path: /etc/localtime
      - name: data03
        nfs:
          server: 10.0.0.231
          path: /software/data/nfs-server
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data01
          mountPath: /usr/share/nginx/html
        - name: data02
          mountPath: /etc/localtime
        - name: data03
          mountPath: /nfsdata
root@master231:/manifests/volumes#
4 configMap
4.1 什么是configMap
configMap的本质就是将配置信息基于键值对的方式进行映射的一种手段,数据存储在etcd
configMap将你的环境配置信息和容器镜像解耦,便于应用配置的修改
4.2 声明式API创建cm资源
root@master231:/manifests/volumes# kubectl explain configMap.data
root@master231:~/manifests/configmaps# cat 01-cm.demo
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
# 定义配置信息
data:
  # 类属性键;每一个键都映射到一个简单的值
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  # 自定义键值对
  xingming: zhangsan
  city: Sichuan
  # 类文件键
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
  # 定义MySQL的配置信息
  my.cnf: |
    datadir=/data/mysql80
    basedir=/softwares/mysql80
    port=3306
    socket=/tmp/mysql80.sock
root@master231:~/manifests/configmaps#
root@master231:/manifests/volumes# kubectl apply -f 01-cm.demo
configmap/game-demo created
root@master231:/manifests/volumes# kubectl get cm
NAME DATA AGE
game-demo 7 8s
kube-root-ca.crt 1 7d14h
root@master231:/manifests/volumes# kubectl get -f 01-cm.demo
NAME DATA AGE
game-demo 7 24s
root@master231:/manifests/volumes# kubectl get cm game-demo
NAME DATA AGE
game-demo 7 38s
4.3 响应式创建cm
root@master231:~/manifests/configmaps# kubectl create configmap test-xy --from-file=myhosts=/etc/hosts --from-literal=username=admin --from-literal=password=admin123 --from-literal=myos=/etc/os-release
root@master231:~/manifests/configmaps# kubectl get cm
NAME DATA AGE
game-demo 7 24h
kube-root-ca.crt 1 8d
test-xy 4 44s
root@master231:~/manifests/configmaps# kubectl get cm test-xy
NAME DATA AGE
test-xy 4 54s
root@master231:~/manifests/configmaps# kubectl describe cm test-xy
root@master231:~/manifests/configmaps# kubectl get cm test-xy -o yaml
4.4 基于环境变量引用cm资源
root@master231:~/manifests/volumes# cat 07-deploy-cm-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-cm-env
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        env:
        - name: test-xingming
          # 值从哪里引用
          valueFrom:
            # 表示值从一个cm类型引用
            configMapKeyRef:
              # 指定引用cm资源的名称
              name: game-demo
              # 引用cm的某个key
              key: xingming
        - name: test-mysql
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: my.cnf
root@master231:~/manifests/volumes# kubectl apply -f 07-deploy-cm-env.yaml
root@master231:~/manifests/volumes# kubectl exec -it deploy-cm-env-6987ccb879-ddlbk -- env | grep test
test-xingming=zhangsan
test-mysql=datadir=/data/mysql80
4.5 pod基于存储卷引用cm资源
root@master231:~/manifests/volumes# cat 08-deploy-cm-volume.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-cm-volume
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - name: data
        # 表示存储卷的类型是cm
        configMap:
          # 指定cm的名称
          name: game-demo
          # 引用cm的具体key,若不定义,则会将cm的所有key全部引用,将每个key作为文件名称。
          # 如果定义,可以引用具体的字段名称,且指定文件名称。
          items:
            # 引用cm资源具体的KEY
          - key: game.properties
            # 指定将来挂载时的文件名称
            path: test-game.properties
          - key: my.cnf
            path: ysl-my.cnf
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        volumeMounts:
        - name: data
          mountPath: /ysl
root@master231:~/manifests/volumes# kubectl apply -f 08-deploy-cm-volume.yaml
root@master231:~/manifests/volumes# kubectl exec -it deploy-cm-volume-b645c76cc-h9pfm -- ls -l /ysl
total 0
lrwxrwxrwx 1 root root 27 Apr 13 03:01 test-game.properties -> ..data/test-game.properties
lrwxrwxrwx 1 root root 17 Apr 13 03:01 ysl-my.cnf -> ..data/ysl-my.cnf
4.6 cm资源映射nginx配置文件subPath
root@master231:~/manifests/volumes# cat 09-deploy-cm-nginx-subPath.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-subfile
data:
  nginx.conf: |
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log notice;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        keepalive_timeout 65;
        include /etc/nginx/conf.d/*.conf;
    }
  default.conf: |
    server {
        listen 81;
        listen [::]:81;
        server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-cm-nginx-subpath
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - name: data01
        configMap:
          name: nginx-subfile
          items:
          - key: default.conf
            path: default.conf
      - name: data02
        configMap:
          name: nginx-subfile
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: data03
        hostPath:
          path: /etc/localtime
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        #command: ["tail","-f","/etc/hosts"]
        volumeMounts:
        - name: data01
          mountPath: /etc/nginx/conf.d/
        - name: data02
          mountPath: /etc/nginx/nginx.conf
          # 当subPath的值和存储卷的path相同时,mountPath挂载点将是一个文件,而非目录。
          subPath: nginx.conf
        - name: data03
          mountPath: /etc/localtime
root@master231:~/manifests/volumes# kubectl apply -f 09-deploy-cm-nginx-subPath.yaml
configmap/nginx-subfile created
deployment.apps/deploy-cm-nginx-subpath created
root@master231:~/manifests/volumes#
root@master231:~/manifests/volumes# kubectl exec -it deploy-cm-nginx-subpath-6f7cc6446b-66jvf -- ls -l /etc/nginx;date -R
total 32
drwxrwxrwx 3 root root 4096 Apr 13 11:09 conf.d
-rw-r--r-- 1 root root 1077 May 25 2021 fastcgi.conf
-rw-r--r-- 1 root root 1007 May 25 2021 fastcgi_params
-rw-r--r-- 1 root root 5231 May 25 2021 mime.types
lrwxrwxrwx 1 root root 22 Nov 13 2021 modules -> /usr/lib/nginx/modules
-rw-r--r-- 1 root root 597 Apr 13 11:09 nginx.conf
-rw-r--r-- 1 root root 636 May 25 2021 scgi_params
-rw-r--r-- 1 root root 664 May 25 2021 uwsgi_params
Sun, 13 Apr 2023 11:10:49 +0800
5 secrets
5.1 概述
和cm作用类似,但secrets是k8s用于存储敏感数据(密码、令牌或密钥的对象)的资源
会将数据以base64进行编码,并将敏感数据与普通配置分开单独存储,从而降低数据泄露的风险。
官网链接:
https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/
https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/#secret-types
5.2 声明式创建secret的两种方式
5.2.1 基于stringData
root@master231:~/manifests/secrets# kubectl apply -f 01-secrets-demo.yaml
secret/test-mysql-conn created
root@master231:~/manifests/secrets# kubectl get -f 01-secrets-demo.yaml
NAME TYPE DATA AGE
test-mysql-conn Opaque 3 11s
root@master231:~/manifests/secrets# kubectl describe -f 01-secrets-demo.yaml
Name: test-mysql-conn
Namespace: default
Labels:
Annotations:
Type: Opaque
Data
====
my.cnf: 92 bytes
password: 3 bytes
username: 5 bytes
root@master231:~/manifests/secrets# cat 01-secrets-demo.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-mysql-conn
# 创建secret时会自动将value的值自动编码
stringData:
  username: admin
  password: ysl
  my.cnf: |
    datadir=/ysl/data/mysql80
    basedir=/ysl/softwares/mysql80
    port=3306
    socket=/tmp/mysql80.sock
root@master231:~/manifests/secrets# kubectl get secrets test-mysql-conn -o yaml
apiVersion: v1
data:
  my.cnf: ZGF0YWRpcj0veXNsL2RhdGEvbXlzcWw4MApiYXNlZGlyPS95c2wvc29mdHdhcmVzL215c3FsODAKcG9ydD0zMzA2CnNvY2tldD0vdG1wL215c3FsODAuc29jawo=
  password: eXNs
  username: YWRtaW4=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"test-mysql-conn","namespace":"default"},"stringData":{"my.cnf":"datadir=/ysl/data/mysql80\nbasedir=/ysl/softwares/mysql80\nport=3306\nsocket=/tmp/mysql80.sock\n","password":"ysl","username":"admin"}}
  creationTimestamp: "2025-04-13T03:18:26Z"
  name: test-mysql-conn
  namespace: default
  resourceVersion: "476167"
  uid: b1d5bdc1-31a9-4d41-adb5-6a660fccd588
type: Opaque
5.2.2 将编码的文件解码
base64解码:
echo -n <base64编码后的字符串> | base64 -d
base64编码:
echo -n <需要编码的内容> | base64
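以上文secret中的username字段为例(示意,-n用于避免把换行符一并编码进去):
# 编码
echo -n admin | base64
YWRtaW4=
# 解码
echo -n YWRtaW4= | base64 -d
admin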
5.2.3 基于data(反推yaml)
root@master231:~/manifests/secrets# kubectl get secrets test-mysql-conn -o yaml > 02-secrets-Data-demo.yaml
root@master231:~/manifests/secrets# kubectl apply -f 02-secrets-Data-demo.yaml
5.3 响应式创建secret
root@master231:~/manifests/secrets# kubectl create secret generic test-secrets --from-file=myhost=/etc/hosts --from-file=myos=/etc/os-release --from-literal=xingming=ysl --from-literal=class=ysl
secret/test-secrets created
root@master231:~/manifests/secrets# kubectl get secrets test-secrets
NAME TYPE DATA AGE
test-secrets Opaque 4 28s
root@master231:~/manifests/secrets# kubectl describe secrets test-secrets
5.4 pod基于环境变量应用secret
root@master231:~/manifests/volumes# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-secret-env-5dfb865b94-468t7 1/1 Running 0 3s
deploy-secret-env-5dfb865b94-mmztq 1/1 Running 0 3s
deploy-secret-env-5dfb865b94-vnpr2 1/1 Running 0 3s
root@master231:~/manifests/volumes# kubectl exec -it deploy-secret-env-5dfb865b94-468t7 -- env | grep test
test-username=admin
test-password=ysl
test-class=datadir=/ysl/data/mysql80
root@master231:~/manifests/volumes# cat 10-deploy-secrets-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-secret-env
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        env:
        - name: test-username
          valueFrom:
            # 表示值从一个secret类型引用
            secretKeyRef:
              # 指定引用secret资源的名称
              name: test-mysql-conn
              # 引用secret的某个key
              key: username
        - name: test-password
          valueFrom:
            secretKeyRef:
              name: test-mysql-conn
              key: password
        - name: test-class
          valueFrom:
            secretKeyRef:
              name: test-mysql-conn
              key: my.cnf
root@master231:~/manifests/volumes#
5.5 基于存储卷应用secret
root@master231:~/manifests/volumes# cat 11-deploy-secrets-volume.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-secret-volume
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - name: data
        # 表示存储卷的类型是secret
        secret:
          # 指定secret的名称
          secretName: test-mysql-conn
          items:
          - key: username
            path: username.txt
          - key: password
            path: pwd.txt
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        volumeMounts:
        - name: data
          mountPath: /ysl
root@master231:~/manifests/volumes# kubectl apply -f 11-deploy-secrets-volume.yaml
deployment.apps/deploy-secret-volume created
root@master231:~/manifests/volumes# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-secret-volume-6d5f7df59-55fmg 1/1 Running 0 9s
deploy-secret-volume-6d5f7df59-7q4df 1/1 Running 0 9s
deploy-secret-volume-6d5f7df59-lkmcz 1/1 Running 0 9s
root@master231:~/manifests/volumes# kubectl exec -it deploy-secret-volume-6d5f7df59-55fmg -- cat /ysl/username.txt;cat /ysl/pwd.txt
root@master231:~/manifests/volumes# kubectl exec -it deploy-secret-volume-6d5f7df59-55fmg -- sh
/ # ls /ysl
pwd.txt username.txt
/ # cat /ysl/pwd.txt
/ # cat /ysl/pwd.txt -n
1 ysl
/ # exit
6 拉取harbor私有镜像
6.1 推送镜像到harbor
root@worker233:~# docker tag crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:nginx-1.24 harbor.ysl.com/ysl-xiuxian/apps:nginx-1.24
root@worker233:~# docker push harbor.ysl.com/ysl-xiuxian/apps:nginx-1.24
6.2 将harbor仓库设置为私有
在配置管理里面将“公开”勾选叉掉并保存
6.3 创建harbor的普通用户并授权项目
略
6.4 响应式创建secrets
root@master231:~/manifests/volumes# kubectl create secret docker-registry ysl-xiuxian --docker-username=ceshi --docker-password=XZnh@95599 --docker-email=ceshi@qq.com --docker-server=harbor.ysl.com
secret/ysl-xiuxian created
root@master231:~/manifests/volumes#
root@master231:~/manifests/volumes# kubectl get secret/ysl-xiuxian
NAME TYPE DATA AGE
ysl-xiuxian kubernetes.io/dockerconfigjson 1 48s
root@master231:~/manifests/volumes# kubectl describe secret/ysl-xiuxian
root@master231:~/manifests/volumes# kubectl get secret/ysl-xiuxian -o yaml
6.5 拉取镜像时指定secret
root@master231:~/manifests/volumes# cat 12-deploy-secrets-harbor.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-secret-harbor
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      # 指定拉取私有仓库的镜像仓库的secret认证信息
      imagePullSecrets:
      - name: ysl-xiuxian
      containers:
      - name: c1
        image: harbor.ysl.com/ysl-xiuxian/apps:v2
root@master231:~/manifests/volumes# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-secret-harbor-57b7f55bb9-gdl7g 1/1 Running 0 5s
deploy-secret-harbor-57b7f55bb9-hkrhz 1/1 Running 0 5s
deploy-secret-harbor-57b7f55bb9-k4z7w 1/1 Running 0 5s
6.6 容器拉取镜像策略
Always
1 如果本地没有镜像,则会去远程仓库拉取镜像
2 如果本地有镜像,则会对比本地的镜像摘要信息和远程仓库的摘要信息对比,相同则不拉取,不同则拉取
Never
1 不管本地有没有镜像,都不会拉取
IfNotPresent
1 如果本地没有镜像,则会去远程仓库拉取镜像
2 如果本地已有镜像,则直接使用本地镜像启动容器,不再拉取
温馨提示:
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.
如果镜像标签的tag为”:latest”时,默认的镜像策略为”Always”,否则默认镜像拉取策略为”IfNotPresent”。
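下面是一个在容器级别显式指定镜像拉取策略的最小片段(示意,镜像沿用上文harbor私有仓库中的镜像):
containers:
- name: c1
  image: harbor.ysl.com/ysl-xiuxian/apps:v1
  # 显式指定拉取策略,不写时按上面的默认规则处理
  imagePullPolicy: IfNotPresent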
7 serviceaccounts
7.1 概述
serviceaccounts简称sa,表示服务账号,其作用就是用于身份验证。
创建一个sa时会自动关联一个secret资源
为pod内的服务提供服务账号,可以基于该账号进行认证操作
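Pod内的进程会在固定路径下看到kubelet挂载的sa凭据,可以直接用它向api-server发起认证请求,下面是一个最小的验证思路(示意,能否通过鉴权取决于该sa绑定的RBAC权限):
/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ # CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
/ # curl --cacert $CA -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api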
7.2 响应式创建sa
root@master231:~/manifests/volumes# kubectl create serviceaccount ysl
serviceaccount/ysl created
root@master231:~/manifests/volumes# kubectl get sa
NAME SECRETS AGE
default 1 8d
ysl 1 50s
root@master231:~/manifests/volumes# kubectl get sa ysl -o yaml
…
secrets:
- name: ysl-token-wl69r
root@master231:~/manifests/volumes# kubectl get secrets ysl-token-wl69r
NAME TYPE DATA AGE
ysl-token-wl69r kubernetes.io/service-account-token 3 2m2s
root@master231:~/manifests/volumes#
7.3 声明式创建服务账号
root@master231:~/manifests/serviceaccounts# kubectl delete -f 01-sa.yaml
serviceaccount “ysl” deleted
root@master231:~/manifests/serviceaccounts# kubectl apply -f 01-sa.yaml
serviceaccount/ysl created
root@master231:~/manifests/serviceaccounts# kubectl get -f 01-sa.yaml
NAME SECRETS AGE
ysl 2 7s
root@master231:~/manifests/serviceaccounts#
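上面引用的01-sa.yaml原文未贴出,其内容大致如下(示意,具体以实际文件为准):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ysl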
7.4 基于sa引用secret实现镜像拉取
root@master231:~/manifests/volumes# cat 13-deploy-imagePullPolicy-serviceAccountName.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-ysl
type: kubernetes.io/dockerconfigjson
stringData:
  # 解码:
  #   echo bGludXg5NTpMaW51eDk1QDIwMjU= | base64 -d
  # 编码:
  #   echo -n ysl:ysl@2025 | base64
  .dockerconfigjson: '{"auths":{"harbor.ysl.com":{"username":"ceshi","password":"XZnh@95599","email":"ceshi@qq.com","auth":"Y2VzaGkK"}}}'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ysl
imagePullSecrets:
- name: harbor-ysl
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-secret-harbor
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      #imagePullSecrets:
      #- name: harbor-ysl
      # 也可以不使用"imagePullSecrets",而是基于sa进行认证
      serviceAccountName: ysl
      containers:
      - name: c1
        image: harbor.ysl.com/test-xiuxian/test:v2
        # imagePullPolicy: Never
        # imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
root@master231:~/manifests/volumes# kubectl apply -f 13-deploy-imagePullPolicy-serviceAccountName.yaml
[root@master231 volumes]# kubectl get pods -o wide
8 valueFrom之fieldRef及resourceFieldRef
fieldRef有效值:
- metadata.name
- metadata.namespace
- metadata.labels['<KEY>']
- metadata.annotations['<KEY>']
- spec.nodeName
- spec.serviceAccountName
- status.hostIP
- status.podIP
- status.podIPs
resourceFieldRef有效值:
- limits.cpu
- limits.memory
- limits.ephemeral-storage
- requests.cpu
- requests.memory
- requests.ephemeral-storage
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-valuefrom
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      containers:
      - name: c1
        image: harbor.ysl.com/ysl-xiuxian/apps:v2
        resources:
          requests:
            cpu: 0.2
            memory: 200Mi
          limits:
            cpu: 0.5
            memory: 500Mi
        imagePullPolicy: Always
        env:
        - name: test-PODNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: test-IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: test-REQUESTS
          valueFrom:
            resourceFieldRef:
              resource: requests.cpu
        - name: test-LIMITS
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
[root@master231 volumes]# kubectl apply -f 14-deploy-valueFrom.yaml
deployment.apps/deploy-valuefrom created
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
[root@master231 volumes]# kubectl exec deploy-valuefrom-7f48549b-5v4rn -- env
…
test-LIMITS=524288000
test-PODNAME=deploy-valuefrom-7f48549b-5v4rn
test-IP=10.100.2.182
# 很明显,对于requests字段并没有抓到0.2,而是”向上取整”。
test-REQUESTS=1
…
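这里取到1是因为resourceFieldRef默认的divisor为1,0.2核会被向上取整;如果想拿到毫核粒度的值,可以显式指定divisor,例如下面的片段(示意):
- name: test-REQUESTS
  valueFrom:
    resourceFieldRef:
      resource: requests.cpu
      # 以1m为单位换算,0.2核对应200
      divisor: 1m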
9 downwardAPI
9.1 概述
与ConfigMap和Secret不同,DownwardAPI自身并非一种独立的API资源类型。
DownwardAPI只是一种将Pod的metadata、spec或status中的字段值注入到其内部Container里的方式。
DownwardAPI提供了两种方式用于将POD的信息注入到容器内部
- 环境变量:
  用于单个变量,可以将POD信息和容器信息直接注入容器内部。
- Volume挂载:
  将POD信息生成为文件,直接挂载到容器内部中去。
9.2 应用部署测试
root@master231:~/manifests/volumes# kubectl apply -f 15-deploy-download-API.yaml
deployment.apps/downloadapi-demo created
root@master231:~/manifests/volumes# kubectl get pods
NAME READY STATUS RESTARTS AGE
downloadapi-demo-684844f646-pdx9l 2/2 Running 0 4s
root@master231:~/manifests/volumes# cat 15-deploy-download-API.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: downloadapi-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data01
        downwardAPI:
          items:
          - path: pod-name
            # 仅支持: annotations, labels, name and namespace。
            fieldRef:
              fieldPath: "metadata.name"
      - name: data02
        downwardAPI:
          items:
          - path: pod-ns
            fieldRef:
              fieldPath: "metadata.namespace"
      - name: data03
        downwardAPI:
          items:
          - path: containers-limists-memory
            # 仅支持: limits.cpu, limits.memory, requests.cpu and requests.memory
            resourceFieldRef:
              containerName: c1
              resource: "limits.memory"
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
        resources:
          requests:
            cpu: 0.2
            memory: 300Mi
          limits:
            cpu: 0.5
            memory: 500Mi
        volumeMounts:
        - name: data01
          mountPath: /xixi
        - name: data02
          mountPath: /haha
        - name: data03
          mountPath: /hehe
      - name: c2
        image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2
        command:
        - tail
        args:
        - -f
        - /etc/hosts
        resources:
          limits:
            cpu: 1.5
            memory: 1.5Gi
root@master231:~/manifests/volumes#
root@master231:~/manifests/volumes# kubectl exec -it downloadapi-demo-684844f646-pdx9l -c c1 -- sh
/ # ls /xixi/
pod-name
/ # ls /h
haha/ hehe/ home/
/ # ls /hehe/
containers-limists-memory
/ # ls /haha
pod-ns
/ #
10 projected
10.1 概述
Projected Volume是一种特殊的卷类型,它能够将已存在的多个卷投射进同一个挂载点目录中。
Projected Volume仅支持对如下四种类型的卷(数据源)进行投射操作,这类的卷一般都是用于为容器提供预先定义好的数据:
- Secret:
  投射Secret对象。
- ConfigMap:
  投射ConfigMap对象。
- DownwardAPI:
  投射Pod元数据。
- ServiceAccountToken:
  投射ServiceAccount Token。
10.2 部署应用测试
root@master231:~/manifests/volumes# cat 16-deploy-projested-volumes.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ysl-cm
data:
  blog: "https://www.cnblogs.com/yangsenlin"
  k8s: "https://space.bilibili.com/600805398/channel/series"
---
apiVersion: v1
kind: Secret
metadata:
  name: ysl-secrets
stringData:
  username: admin
  password: yangsenlin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: projected-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data01
        projected:
          sources:
          - downwardAPI:
              items:
              - path: containers-limists-memory
                resourceFieldRef:
                  containerName: c1
                  resource: "limits.memory"
          - configMap:
              name: ysl-cm
          - secret:
              name: ysl-secrets
          - serviceAccountToken:
              path: ysl-token
      containers:
      - name: c1
        image: harbor.ysl.com/ysl-xiuxian/apps:v1
        resources:
          limits:
            cpu: 0.5
            memory: 500Mi
        volumeMounts:
        - name: data01
          mountPath: /ysl-xixi
root@master231:~/manifests/volumes# kubectl apply -f 16-deploy-projested-volumes.yaml
configmap/ysl-cm unchanged
secret/ysl-secrets configured
deployment.apps/projected-demo created
root@master231:~/manifests/volumes# kubectl get pods
NAME READY STATUS RESTARTS AGE
projected-demo-d5544c7d7-kpbxb 1/1 Running 0 3s
root@master231:~/manifests/volumes# kubectl exec -it projected-demo-d5544c7d7-kpbxb -- sh
/ # ls /ysl-xixi
blog k8s username
containers-limists-memory password ysl-token
/ #
11 pv sc pvc
11.1 概述
- pv
  pv用于和后端存储对接的资源,关联后端存储。
- sc
  sc可以动态创建pv的资源,关联后端存储。
- pvc
  可以向pv或者sc进行资源请求,获取特定的存储。
  pod只需要在存储卷声明使用哪个pvc即可。
11.2 手动创建pv和pvc及pod引用
1 手动创建pv
root@master231:~/manifests/volumes# mkdir -pv /ysl/data/nfs-server/pv/linux/pv00{1,2,3}
root@master231:~/manifests/persistentvolumes# cat manual-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-linux-pv01
  labels:
    xingming: zhangsan
spec:
  # 声明PV的访问模式,常用的有"ReadWriteOnce","ReadOnlyMany"和"ReadWriteMany":
  #   ReadWriteOnce:(简称:"RWO")
  #     只允许单个worker节点读写存储卷,但是该节点的多个Pod是可以同时访问该存储卷的。
  #   ReadOnlyMany:(简称:"ROX")
  #     允许多个worker节点对存储卷进行只读访问。
  #   ReadWriteMany:(简称:"RWX")
  #     允许多个worker节点对存储卷进行读写访问。
  #   ReadWriteOncePod:(简称:"RWOP")
  #     该卷可以通过单个Pod以读写方式装入。
  #     如果您想确保整个集群中只有一个pod可以读取或写入PVC,请使用ReadWriteOncePod访问模式。
  #     这仅适用于CSI卷和Kubernetes版本1.22+。
  accessModes:
  - ReadWriteMany
  # 声明存储卷的类型为nfs
  nfs:
    path: /ysl/data/nfs-server/pv/linux/pv001
    server: 10.0.0.231
  # 指定存储卷的回收策略,常用的有"Retain"和"Delete"
  #   Retain:
  #     "保留回收"策略允许手动回收资源。
  #     删除PersistentVolumeClaim时,PersistentVolume仍然存在,并且该卷被视为"已释放"。
  #     在管理员手动回收资源之前,使用该策略其他Pod将无法直接使用。
  #   Delete:
  #     对于支持删除回收策略的卷插件,k8s将删除pv及其对应的数据卷数据。
  #   Recycle:
  #     "回收利用"策略官方已弃用。相反,推荐的方法是使用动态资源调配。
  #     如果基础卷插件支持,该回收策略将对卷执行基本清理(rm -rf /thevolume/*),并使其再次可用于新的声明。
  persistentVolumeReclaimPolicy: Retain
  # 声明存储的容量
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-linux-pv02
  labels:
    xingming: zhangsan
spec:
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ysl/data/nfs-server/pv/linux/pv002
    server: 10.0.0.231
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-linux-pv03
  labels:
    xingming: zhangsan
spec:
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ysl/data/nfs-server/pv/linux/pv003
    server: 10.0.0.231
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10Gi
root@master231:~/manifests/persistentvolumes# kubectl apply -f manual-pv.yaml
persistentvolume/test-linux-pv01 created
persistentvolume/test-linux-pv02 created
persistentvolume/test-linux-pv03 created
root@master231:~/manifests/persistentvolumes# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
test-linux-pv01 2Gi RWX Retain Available 4s
test-linux-pv02 5Gi RWX Retain Available 4s
test-linux-pv03 10Gi RWX Retain Available 4s
root@master231:~/manifests/persistentvolumes#
相关资源说明:
NAME :
pv的名称
CAPACITY :
pv的容量
ACCESS MODES:
pv的访问模式
RECLAIM POLICY:
pv的回收策略。
STATUS :
pv的状态。
CLAIM:
pv被哪个pvc使用。
STORAGECLASS
sc的名称。
REASON
pv出错时的原因。
AGE
创建的时间。
2 手动创建pvc
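这里的pvc清单原文未贴出,结合下面绑定到5Gi的test-linux-pv02的结果,其内容大致如下(示意,请求容量以实际文件为准):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-linux-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      # 在2Gi/5Gi/10Gi三个pv中,满足该请求且容量最小的是5Gi的test-linux-pv02
      storage: 3Gi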
root@master231:~/manifests/persistentvolumes# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-linux-pvc Bound test-linux-pv02 5Gi RWX 7s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/test-linux-pv01 2Gi RWX Retain Available 2m34s
persistentvolume/test-linux-pv02 5Gi RWX Retain Bound default/test-linux-pvc 2m34s
persistentvolume/test-linux-pv03 10Gi RWX Retain Available 2m34s
root@master231:~/manifests/persistentvolumes#
root@master231:/manifests/volumes# cat 17-deploy-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-pvc-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data
        # 声明存储卷的类型是pvc
        persistentVolumeClaim:
          # 声明pvc的名称
          claimName: test-linux-pvc
      - name: dt
        hostPath:
          path: /etc/localtime
      initContainers:
      - name: init01
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        volumeMounts:
        - name: data
          mountPath: /ysl
        - name: dt
          mountPath: /etc/localtime
        command:
        - /bin/sh
        - -c
        - date -R >> /ysl/index.html ; echo www.ysl.com >> /ysl/index.html
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
        - name: dt
          mountPath: /etc/localtime
root@master231:/manifests/volumes# kubectl apply -f 17-deploy-pvc.yaml
deployment.apps/deploy-pvc-demo created
root@master231:/manifests/volumes#
4.验证Pod后端的存储数据
4.1 找到pvc的名称
[root@master231 volumes]# kubectl describe pod deploy-pvc-demo-688b57bdd-dlkzd
Name: deploy-pvc-demo-688b57bdd-dlkzd
Namespace: default
…
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-linux-pvc
…
4.2 基于pvc找到与之关联的pv
[root@master231 volumes]# kubectl get pvc test-linux-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-linux-pvc Bound test-linux-pv02 5Gi RWX 12m
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pv test-linux-pv02
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
test-linux-pv02 5Gi RWX Retain Bound default/test-linux-pvc 15m
[root@master231 volumes]#
4.3 查看pv的详细信息
[root@master231 volumes]# kubectl describe pv test-linux-pv02
Name: test-linux-pv02
Labels: school=ysl
…
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.0.0.231
Path: /ysl/data/nfs-server/pv/linux/pv002
ReadOnly: false
…
4.4 验证数据的内容
[root@master231 volumes]# ll /ysl/data/nfs-server/pv/linux/pv002
total 12
drwxr-xr-x 2 root root 4096 Feb 14 16:46 ./
drwxr-xr-x 5 root root 4096 Feb 14 16:36 ../
-rw-r--r-- 1 root root 68 Feb 14 16:49 index.html
[root@master231 volumes]#
[root@master231 volumes]# cat /ysl/data/nfs-server/pv/linux/pv002/index.html
Fri, 14 Feb 2025 16:49:42 +0800
www.ysl.com
[root@master231 volumes]#
12 基于nfs4.9.0版本实现动态存储类应用部署
推荐阅读:
https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/install-csi-driver-v4.9.0.md
https://kubernetes.io/docs/concepts/storage/storage-classes/#nfs
12.1 克隆代码
[root@master231 nfs]# git clone https://github.com/kubernetes-csi/csi-driver-nfs.git
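克隆下来的是主分支代码,而下文使用的是csi-driver-nfs-4.9.0目录,可以先切换到对应的发布版本再执行安装脚本(示意):
[root@master231 nfs]# cd csi-driver-nfs
[root@master231 csi-driver-nfs]# git checkout v4.9.0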
12.2 安装动态存储类
root@master231:~/csi-driver-nfs-4.9.0# ./deploy/install-driver.sh v4.9.0 local
use local deploy
Installing NFS CSI driver, version: v4.9.0 …
serviceaccount/csi-nfs-controller-sa created
serviceaccount/csi-nfs-node-sa created
clusterrole.rbac.authorization.k8s.io/nfs-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-provisioner-binding created
csidriver.storage.k8s.io/nfs.csi.k8s.io created
deployment.apps/csi-nfs-controller created
daemonset.apps/csi-nfs-node created
NFS CSI driver installed successfully.
12.3 验证安装
root@master231:~/images# kubectl -n kube-system get pods -o wide | grep csi
csi-nfs-controller-5c5c695fb-qxxp2 4/4 Running 0 4m31s 10.0.0.232 worker232
csi-nfs-node-6k297 3/3 Running 0 37m 10.0.0.231 master231
csi-nfs-node-sxjwx 3/3 Running 0 37m 10.0.0.232 worker232
csi-nfs-node-tjm4p 3/3 Running 0 4m13s 10.0.0.233 worker233
root@master231:~/images#
4.创建存储类
root@master231:~/csi-driver-nfs-4.9.0# mkdir -p /ysl/data/nfs-server/sc
root@master231:~/csi-driver-nfs-4.9.0# cat deploy/v4.9.0/storageclass.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.231
  share: /ysl/data/nfs-server/sc
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
- nfsvers=4.1
root@master231:~/csi-driver-nfs-4.9.0# kubectl apply -f deploy/v4.9.0/storageclass.yaml
storageclass.storage.k8s.io/nfs-csi created
root@master231:~/csi-driver-nfs-4.9.0# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Delete Immediate false 7s
root@master231:~/csi-driver-nfs-4.9.0#
5.删除pv和pvc保证环境”干净”
root@master231:/manifests/persistentvolumes# kubectl delete -f .
persistentvolume "ysl-linux-pv01" deleted
persistentvolume "ysl-linux-pv02" deleted
persistentvolume "ysl-linux-pv03" deleted
persistentvolumeclaim "ysl-linux-pvc" deleted
root@master231:/manifests/persistentvolumes# kubectl get pv,pvc,pods
No resources found
6.创建pvc测试
[root@master231 persistentvolumeclaims]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Delete Immediate false 9m15s
[root@master231 persistentvolumeclaims]#
[root@master231 persistentvolumeclaims]# cat pvc-sc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-linux-pvc
spec:
  storageClassName: nfs-csi
  accessModes:
  - ReadWriteMany
  resources:
    limits:
      storage: 4Gi
    requests:
      storage: 3Gi
[root@master231 persistentvolumeclaims]#
[root@master231 persistentvolumeclaims]# kubectl apply -f pvc-sc.yaml
persistentvolumeclaim/test-linux-pvc created
[root@master231 persistentvolumeclaims]#
[root@master231 persistentvolumeclaims]# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-linux-pvc Bound pvc-f47d1e5b-a2f1-463f-b06c-7940add76104 3Gi RWX nfs-csi 14s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-f47d1e5b-a2f1-463f-b06c-7940add76104 3Gi RWX Delete Bound default/test-linux-pvc nfs-csi 14s
[root@master231 persistentvolumeclaims]#
7.pod引用pvc
[root@master231 volumes]# kubectl apply -f 17-deploy-pvc.yaml
deployment.apps/deploy-pvc-demo created
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-pvc-demo-688b57bdd-td2z7 1/1 Running 0 4s 10.100.1.143 worker232
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-pvc-demo-688b57bdd-td2z7 1/1 Running 0 4s 10.100.1.143 worker232
[root@master231 volumes]#
[root@master231 volumes]# curl 10.100.1.143
Fri, 14 Feb 2025 17:27:25 +0800
www.ysl.com
[root@master231 volumes]#
8.验证pod的后端存储数据
[root@master231 volumes]# kubectl describe pod deploy-pvc-demo-688b57bdd-td2z7 | grep ClaimName
ClaimName: test-linux-pvc
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pvc test-linux-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-linux-pvc Bound pvc-b425481e-3b15-4854-bf34-801a29edfcc5 3Gi RWX nfs-csi 2m29s
[root@master231 volumes]#
[root@master231 volumes]# kubectl describe pv pvc-b425481e-3b15-4854-bf34-801a29edfcc5 | grep Source -A 5
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: nfs.csi.k8s.io
FSType:
VolumeHandle: 10.0.0.231#ysl/data/nfs-server/sc#pvc-b425481e-3b15-4854-bf34-801a29edfcc5##
ReadOnly: false
[root@master231 volumes]#
[root@master231 volumes]# ll /ysl/data/nfs-server/sc/pvc-b425481e-3b15-4854-bf34-801a29edfcc5/
total 12
drwxr-xr-x 2 root root 4096 Feb 14 17:27 ./
drwxr-xr-x 3 root root 4096 Feb 14 17:26 ../
-rw-r--r-- 1 root root 50 Feb 14 17:27 index.html
[root@master231 volumes]#
[root@master231 volumes]# cat /ysl/data/nfs-server/sc/pvc-b425481e-3b15-4854-bf34-801a29edfcc5/index.html
Fri, 14 Feb 2025 17:27:25 +0800
www.ysl.com
[root@master231 volumes]#
– K8S配置默认的存储类及多个存储类定义
1.响应式配置默认存储类
[root@master231 nfs]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Delete Immediate false 165m
[root@master231 nfs]#
[root@master231 nfs]# kubectl patch sc nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/nfs-csi patched
[root@master231 nfs]#
[root@master231 nfs]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi (default) nfs.csi.k8s.io Delete Immediate false 166m
[root@master231 nfs]#
[root@master231 nfs]#
2.响应式取消默认存储类
[root@master231 nfs]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi (default) nfs.csi.k8s.io Delete Immediate false 168m
[root@master231 nfs]#
[root@master231 nfs]# kubectl patch sc nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
storageclass.storage.k8s.io/nfs-csi patched
[root@master231 nfs]#
[root@master231 nfs]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Delete Immediate false 168m
[root@master231 nfs]#
3.声明式配置多个存储类
[root@master231 storageclasses]# cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc-xixi
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.231
  share: /ysl/data/nfs-server/sc-xixi
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
- nfsvers=4.1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc-haha
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.231
  share: /ysl/data/nfs-server/sc-haha
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
- nfsvers=4.1
[root@master231 storageclasses]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Delete Immediate false 174m
[root@master231 storageclasses]#
[root@master231 storageclasses]# kubectl apply -f sc.yaml
storageclass.storage.k8s.io/test-sc-xixi created
storageclass.storage.k8s.io/test-sc-haha created
[root@master231 storageclasses]#
[root@master231 storageclasses]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Delete Immediate false 174m
test-sc-haha (default) nfs.csi.k8s.io Delete Immediate false 2s
test-sc-xixi nfs.csi.k8s.io Delete Immediate false 2s
[root@master231 storageclasses]#
4.准备目录
[root@master231 storageclasses]# mkdir -pv /ysl/data/nfs-server/sc-{xixi,haha}
5.测试验证
[root@master231 persistentvolumeclaims]# cat manual-pvc-4.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    limits:
      storage: 4Gi
    requests:
      storage: 3Gi
[root@master231 persistentvolumeclaims]#
[root@master231 persistentvolumeclaims]# kubectl apply -f manual-pvc-4.yaml
persistentvolumeclaim/pvc-linux94 created
[root@master231 persistentvolumeclaims]#
[root@master231 persistentvolumeclaims]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-linux-pvc Bound pvc-f47d1e5b-a2f1-463f-b06c-7940add76104 3Gi RWX nfs-csi 173m
pvc-linux94 Bound pvc-af3b36cc-cf89-4476-895c-7eeef3946564 3Gi RWX test-sc-haha 24s
[root@master231 persistentvolumeclaims]#
6.pod引用pvc
[root@master231 deployments]# cat 17-deploy-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-pvc-default-sc
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-sc
      containers:
      - name: c1
        image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
[root@master231 deployments]#
[root@master231 deployments]# kubectl apply -f 17-deploy-pvc.yaml
deployment.apps/deploy-xiuxian-pvc-default-sc created
[root@master231 deployments]#
[root@master231 deployments]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-xiuxian-pvc-default-sc-56c4948fb9-wjqtd 1/1 Running 0 3s 10.100.1.171 worker232
[root@master231 deployments]#
[root@master231 deployments]# kubectl exec -it deploy-xiuxian-pvc-default-sc-56c4948fb9-wjqtd -- sh
/ # echo www.ysl.com > /usr/share/nginx/html/index.html
/ #
[root@master231 deployments]#
[root@master231 deployments]# curl 10.100.1.171
www.ysl.com
[root@master231 deployments]#
[root@master231 deployments]# kubectl describe pod deploy-xiuxian-pvc-default-sc-56c4948fb9-wjqtd | grep ClaimName
ClaimName: pvc-linux94
[root@master231 deployments]#
[root@master231 deployments]# kubectl get pvc pvc-linux94
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-linux94 Bound pvc-af3b36cc-cf89-4476-895c-7eeef3946564 3Gi RWX test-sc-haha 3m
[root@master231 deployments]#
[root@master231 deployments]# kubectl describe pv pvc-af3b36cc-cf89-4476-895c-7eeef3946564 | grep Source -A 5
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: nfs.csi.k8s.io
FSType:
VolumeHandle: 10.0.0.231#ysl/data/nfs-server/sc-haha#pvc-af3b36cc-cf89-4476-895c-7eeef3946564##
ReadOnly: false
[root@master231 deployments]#
[root@master231 deployments]# ll /ysl/data/nfs-server/sc-haha/pvc-af3b36cc-cf89-4476-895c-7eeef3946564/
total 12
drwxr-xr-x 2 root root 4096 Nov 27 14:53 ./
drwxr-xr-x 3 root root 4096 Nov 27 14:50 ../
-rw-r--r-- 1 root root 18 Nov 27 14:53 index.html
[root@master231 deployments]#
[root@master231 deployments]# more /ysl/data/nfs-server/sc-haha/pvc-af3b36cc-cf89-4476-895c-7eeef3946564/index.html
www.ysl.com
[root@master231 deployments]#
13 local sc
- 为什么要使用local卷
1.使用PV存储数据的痛点
- 1.基于网络存储的PV通常性能损耗较大,因为要涉及到跨节点传输数据,尤其是pod和pv持久卷不在同一个节点的情况下;
- 2.直接使用节点本地的SSD磁盘可获取较好的IO性能,更适用于存储类的服务,例如MongoDB、Ceph、MySQL、ElasticSearch等。
2.hostPath解决pv的问题
这个时候你可能会想到使用hostPath来解决上面提到的问题,但是我们不得不承认以下的问题:
- 1.hostPath卷在Pod被重建后可能被调度至其它节点,从而无法再次使用此前的数据;
- 2.hostPath卷允许Pod访问节点上的任意路径,也存在一定程度的安全风险;
3.local卷解决hostPath存储卷的不足
local卷插件主要用于将本地存储设备(如磁盘、分区或目录)配置为卷。
local卷相比hostPath而言,带来了如下优势:
- 1.基于local卷,调度器能自行完成调度绑定;
  说白了就是第一次完成调度后,后续重新创建Pod时都会被调度到该节点,因为调度信息已经记录在PV的nodeAffinity中。
- 2.local卷只能关联静态置备的PV,目前尚不支持动态置备;
- 3.local卷支持"延迟绑定"特性,延迟绑定机制提供了基于消费者的需求来判定将PVC绑定至哪个PV的可能性。
  说白了,在Pod完成调度之前,还没有消费pvc时,此时的pvc一直处于Pending状态。
  举个例子:
  Pod有3个节点可以满足调度,但在Pod完成调度之前,pvc并不知道应该绑定到哪个pv,
  因为pv关联的是local卷,其必须和pod在同一个节点才行。
4.local卷案例
1 创建local动态存储类
root@master231:/manifests/volumes# cat 18-local-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ysl-local-sc
# "kubernetes.io/no-provisioner"表示不使用动态置备PV,因为local插件不支持
provisioner: kubernetes.io/no-provisioner
# "WaitForFirstConsumer"表示等待消费者(Pod)申请使用PVC时(即第一次被调度时)再进行PV绑定,即"延迟绑定"
# 延迟绑定机制,提供了基于消费者的需求来判定将PVC绑定至哪个PV的可能性
# 说白了,在Pod完成调度之前,还没有消费pvc时,此时的pvc一直处于Pending状态。
# 举个例子:
#   Pod有3个节点可以满足调度,但在Pod完成调度之前,pvc并不知道应该绑定到哪个pv,
#   因为pv是本地卷,其必须和pod在同一个节点才行。
volumeBindingMode: WaitForFirstConsumer
2 查看动态存储类(这个存储类无法自动创建pv,因为此配置为”kubernetes.io/no-provisioner”)
root@master231:/manifests/volumes# kubectl apply -f 18-local-sc.yaml
storageclass.storage.k8s.io/ysl-local-sc created
root@master231:/manifests/volumes# kubectl get sc ysl-local-sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ysl-local-sc kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 8s
3 手动创建pv关联sc
root@master231:/manifests/volumes# kubectl apply -f 19-custom-pv.yaml
persistentvolume/yangsenlin-pv-worker232 created
persistentvolume/yangsenlin-pv-worker233 created
root@master231:/manifests/volumes# cat 19-custom-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: yangsenlin-pv-worker232
  labels:
    type: ssd
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  # 指定pv关联的存储类
  storageClassName: ysl-local-sc
  # 表示卷插件类型是local
  local:
    # 指定本地的路径,这个路径最好对应的是一块本地磁盘,且最好提前创建好,若路径不存在,将来调度成功后会报错:
    #   MountVolume.NewMounter initialization failed for volume "..." : path "..." does not exist
    path: /yangsenlin/data/sc/local01
  # 声明pv与节点的关联关系,与此同时,调度器将基于nodeAffinity影响Pod调度。
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker232
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: yangsenlin-pv-worker233
  labels:
    type: hdd
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: ysl-local-sc
  local:
    path: /yangsenlin/data/sc/local02
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker233
root@master231:/manifests/volumes#
root@master231:/manifests/volumes# kubectl get pv -l type
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
yangsenlin-pv-worker232 10Gi RWO Delete Available ysl-local-sc 29s
yangsenlin-pv-worker233 20Gi RWO Delete Available ysl-local-sc 29s
4 worker节点创建对应目录
root@worker232:~/images# mkdir -pv /yangsenlin/data/sc/local01
mkdir: created directory '/yangsenlin'
mkdir: created directory '/yangsenlin/data'
mkdir: created directory '/yangsenlin/data/sc'
mkdir: created directory '/yangsenlin/data/sc/local01'
root@worker232:~/images#
root@worker233:~/images# mkdir -pv /yangsenlin/data/sc/local02
mkdir: created directory '/yangsenlin'
mkdir: created directory '/yangsenlin/data'
mkdir: created directory '/yangsenlin/data/sc'
mkdir: created directory '/yangsenlin/data/sc/local02'
5 创建pvc验证“延迟绑定”
root@master231:/manifests/local-sc# cat 03-custom-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: yangsenlin-pvc-local
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  # pvc无需关联pv,关联sc即可
  storageClassName: ysl-local-sc
root@master231:/manifests/local-sc# kubectl apply -f 03-custom-pvc.yaml
persistentvolumeclaim/yangsenlin-pvc-local created
root@master231:/manifests/local-sc#
root@master231:/manifests/local-sc# kubectl get -f 03-custom-pvc.yaml
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
yangsenlin-pvc-local Pending
6 创建pod并观察pvc状态
root@master231:/manifests/local-sc# cat 04-deploy-xiuxian.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xiuxian
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: yangsenlin-pvc-local
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/data"
          name: data
root@master231:/manifests/local-sc# kubectl apply -f 04-deploy-xiuxian.yaml
deployment.apps/xiuxian created
root@master231:/manifests/local-sc#
[root@master231 local-sc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-7df7678dcf-kv5fw 1/1 Running 0 9s 10.100.2.38 worker233
xiuxian-7df7678dcf-lcvg4 1/1 Running 0 9s 10.100.2.40 worker233
xiuxian-7df7678dcf-x28bv 1/1 Running 0 9s 10.100.2.39 worker233
[root@master231 local-sc]#
[root@master231 local-sc]# kubectl get pvc yangsenlin-pvc-local
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
yangsenlin-pvc-local Bound yangsenlin-pv-worker233 20Gi RWO ysl-local-sc 2m18s
[root@master231 local-sc]#
– 如何解决local sc的弊端呢?
CSI: Container Storage Interface
参考官网:
https://openebs.io/docs/concepts/architecture
32 harbor实现https
1 环境准备
10.0.0.250 harbor.ysl.com
2core 4G
2 下载harbor
wget https://github.com/goharbor/harbor/releases/download/v2.12.2/harbor-offline-installer-v2.12.2.tgz
3 创建工作目录
root@harbor:~# mkdir -pv /harbor/{softwares,logs,data}
4 解压软件包
root@harbor:~# tar xf harbor-offline-installer-v2.12.2.tgz -C /harbor/softwares/
5 安装docker环境
[root@harbor.ysl.com ~]# tar xf autoinstall-docker-docker-compose.tar.gz
[root@harbor.ysl.com ~]# ./install-docker.sh i
6 配置CA证书
6.1 创建工作目录
root@harbor:~/docker# mkdir -pv /harbor/softwares/harbor/certs/{ca,harbor-server,docker-client}
6.2 进入到harbor证书存放目录
root@harbor:/harbor/softwares/harbor/certs# ll
total 20
drwxr-xr-x 5 root root 4096 Apr 10 22:41 ./
drwxr-xr-x 3 root root 4096 Apr 10 22:41 ../
drwxr-xr-x 2 root root 4096 Apr 10 22:41 ca/
drwxr-xr-x 2 root root 4096 Apr 10 22:41 docker-client/
drwxr-xr-x 2 root root 4096 Apr 10 22:41 harbor-server/
root@harbor:/harbor/softwares/harbor/certs#
6.3 生成自建ca证书
6.3.1 创建ca的私钥
root@harbor:/harbor/softwares/harbor/certs# openssl genrsa -out ca/ca.key 4096
6.3.2 基于自建的CA私钥创建CA证书(注意,证书签发的域名范围)
root@harbor:/harbor/softwares/harbor/certs# openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=CN/ST=Sichuan/L=Sichuan/O=example/OU=Personal/CN=ysl.com" \
-key ca/ca.key \
-out ca/ca.crt
6.3.3 查看自建证书信息
root@harbor:/harbor/softwares/harbor/certs# openssl x509 -in ca/ca.crt -noout -text
7 配置harbor证书
7.1 生成harbor服务器的私钥
root@harbor:/harbor/softwares/harbor/certs# openssl genrsa -out harbor-server/harbor.ysl.com.key 4096
7.2 harbor服务器基于私钥签发证书认证请求(csr文件),让自建CA认证
root@harbor:/harbor/softwares/harbor/certs# openssl req -sha512 -new \
-subj "/C=CN/ST=Sichuan/L=Sichuan/O=example/OU=Personal/CN=harbor.ysl.com" \
-key harbor-server/harbor.ysl.com.key \
-out harbor-server/harbor.ysl.com.csr
7.3 生成x509 v3的扩展文件用于认证
root@harbor:/harbor/softwares/harbor/certs# cat > harbor-server/v3.ext <<-'EOF'
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=harbor.ysl.com
EOF
7.4 基于 x509 v3 的扩展文件认证签发harbor server证书
root@harbor:/harbor/softwares/harbor/certs# openssl x509 -req -sha512 -days 3650 \
-extfile harbor-server/v3.ext \
-CA ca/ca.crt -CAkey ca/ca.key -CAcreateserial \
-in harbor-server/harbor.ysl.com.csr \
-out harbor-server/harbor.ysl.com.crt
7.5 修改harbor的配置文件使用自建证书
root@harbor:/harbor/softwares/harbor/certs# vim ../harbor.yml
…
hostname: harbor.ysl.com
https:
…
certificate: /harbor/softwares/harbor/certs/harbor-server/harbor.ysl.com.crt
private_key: /harbor/softwares/harbor/certs/harbor-server/harbor.ysl.com.key
…
harbor_admin_password: 1
…
data_volume: /harbor/data/harbor
…
8 安装harbor环境
root@harbor:/harbor/softwares/harbor/certs# ../install.sh
…..
✔ ----Harbor has been installed and started successfully.----
9 windows访问测试
9.1 修改windows的hosts解析
192.168.137.250 harbor.ysl.com
9.2 访问测试
https://192.168.137.250/
使用admin/1登录
10 生成docker客户端证书
root@harbor:/harbor/softwares/harbor/certs# pwd
/harbor/softwares/harbor/certs
root@harbor:/harbor/softwares/harbor/certs# openssl x509 -inform PEM -in harbor-server/harbor.ysl.com.crt -out docker-client/harbor.ysl.com.cert
root@harbor:/harbor/softwares/harbor/certs# md5sum docker-client/harbor.ysl.com.cert harbor-server/harbor.ysl.com.crt
6488a8559b05764db9af81a67590a8de docker-client/harbor.ysl.com.cert
6488a8559b05764db9af81a67590a8de harbor-server/harbor.ysl.com.crt
root@harbor:/harbor/softwares/harbor/certs#
11 拷贝docker client证书文件
root@harbor:/harbor/softwares/harbor/certs# cp ca/ca.crt harbor-server/harbor.ysl.com.key docker-client/
12 将harbor的客户端证书拷贝到k8s集群
12.1 k8s集群每个节点新建证书的目录(域名的名称和目录要一样)
root@master231:~# mkdir -pv /etc/docker/certs.d/harbor.ysl.com/
root@worker232:~# mkdir -pv /etc/docker/certs.d/harbor.ysl.com/
root@worker233:~# mkdir -pv /etc/docker/certs.d/harbor.ysl.com/
12.2 从harbor仓库将证书文件拷贝到k8s集群
[root@harbor.ysl.com certs]# scp docker-client/* 10.0.0.231:/etc/docker/certs.d/harbor.ysl.com/
[root@harbor.ysl.com certs]# scp docker-client/* 10.0.0.232:/etc/docker/certs.d/harbor.ysl.com/
[root@harbor.ysl.com certs]# scp docker-client/* 10.0.0.233:/etc/docker/certs.d/harbor.ysl.com/
12.3 客户端验证
root@master231:~# echo '10.0.0.250 harbor.ysl.com' >> /etc/hosts
root@worker232:~# echo '10.0.0.250 harbor.ysl.com' >> /etc/hosts
root@worker233:~# echo '10.0.0.250 harbor.ysl.com' >> /etc/hosts
12.4 k8s集群登录harbor仓库
root@master231:~# docker login -u admin -p 1 harbor.ysl.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
13 推送镜像到harbor仓库
13.1 给镜像打标签
root@master231:~# docker tag crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1 harbor.ysl.com/ysl-xiuxian/apps:v1
13.2 harbor web页面新建一个项目,项目名称为ysl-xiuxian
略
13.3 推送镜像到harbor仓库
root@master231:~# docker push harbor.ysl.com/ysl-xiuxian/apps:v1
The push refers to repository [harbor.ysl.com/ysl-xiuxian/apps]
8e2be8913e57: Pushed
9d5b000ce7c7: Pushed
b8dbe22b95f7: Pushed
c39c1c35e3e8: Pushed
5f66747c8a72: Pushed
15d7cdc64789: Pushed
7fcb75871b21: Pushed
v1: digest: sha256:3bee216f250cfd2dbda1744d6849e27118845b8f4d55dda3ca3c6c1227cc2e5c size: 1778
13.4 到harbor web查看推送的镜像
14 k8s从harbor仓库拉取镜像
root@master231:/manifests/volumes# cat 06-deploy-xiuxian-harbor.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-harbor
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - emptyDir: {}
        name: data
      containers:
      - name: c1
        # image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
        image: harbor.ysl.com/ysl-xiuxian/apps:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
root@master231:/manifests/volumes# kubectl apply -f 06-deploy-xiuxian-harbor.yaml
root@master231:/manifests/volumes# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-xiuxian-harbor-6d99f57cb6-4p2cn 1/1 Running 0 31m
deploy-xiuxian-harbor-6d99f57cb6-fk767 1/1 Running 0 31m
deploy-xiuxian-harbor-6d99f57cb6-qwtx6 1/1 Running 0 31m
15 参考链接
https://goharbor.io/docs/1.10/install-config/configure-https/#generate-a-certificate-authority-certificate
https://www.cnblogs.com/yangsenlin/p/17153673.html
16 快速修复harbor服务
docker-compose down -t 0
docker-compose up -d
docker-compose ps -a
33 图形化管理工具
1 部署dashboard
是一款图形化管理K8S集群的解决方案。
参考链接:
https://github.com/kubernetes/dashboard/releases?page=7
1.1 下载资源清单
root@master231:~/manifests/add-ones/dashboard# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
1.2 修改8443的svc类型为LoadBalancer
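即在recommended.yaml中找到名为kubernetes-dashboard的Service,为其spec增加type字段(片段示意,其余字段保持清单原样):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  # 新增:默认为ClusterIP,改为LoadBalancer以便从集群外访问443端口
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard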
1.3 部署服务
[root@master231 dashboard]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master231 dashboard]#
镜像下载地址:
http://192.168.15.253/Resources/Kubernetes/Add-ons/dashboard/
1.4 查看资源
[root@master231 dashboard]# kubectl get pods,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-799d786dbf-xqm2f 1/1 Running 0 104s
pod/kubernetes-dashboard-fb8648fd9-g6szd 1/1 Running 0 104s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.200.4.60 <none> 8000/TCP 104s
service/kubernetes-dashboard LoadBalancer 10.200.50.252 10.0.0.152 443:14864/TCP 104s
[root@master231 dashboard]#
1.5 访问Dashboard
https://10.0.0.152/#/login
1.6 基于token登录
1.6.1 创建sa
root@master231:~/manifests/add-ones/dashboard# kubectl create serviceaccount ysl
serviceaccount/ysl created
root@master231:~/manifests/add-ones/dashboard#
1.6.2 将sa和内置集群角色绑定
root@master231:~/manifests/add-ones/dashboard# kubectl create clusterrolebinding dashboard-ysl --clusterrole=cluster-admin --serviceaccount=default:ysl -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: dashboard-ysl
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
– kind: ServiceAccount
name: ysl
namespace: default
root@master231:~/manifests/add-ones/dashboard#
root@master231:~/manifests/add-ones/dashboard# kubectl create clusterrolebinding dashboard-ysl --clusterrole=cluster-admin --serviceaccount=default:ysl
clusterrolebinding.rbac.authorization.k8s.io/dashboard-ysl created
root@master231:~/manifests/add-ones/dashboard#
1.6.3 使用token登录
root@master231:~/manifests/add-ones/dashboard# kubectl get secrets `kubectl get sa ysl -o jsonpath='{.secrets[0].name}'` -o jsonpath='{.data.token}' | base64 -d ; echo
eyJhbGciOiJSUzI1NiIsImtpZCI6InhRdmpiTWw2WTlrVG04SjlGOTI0a0hrMXVjZUhWUmV6bDc0LVlhZll1MUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InlzbC10b2tlbi1ucmQ5MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJ5c2wiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0OTlhMmRjMS0wNTA5LTRkY2ItYjIwNy03MGM2OWNhMTBmYmMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp5c2wifQ.1WWAYf9i9ChqYGfbmoJSyqImw5D7hrUZYbznqWk8ExTJFoTiX5_X7ktbZoVxlRwp7oKvepiYn45I8Gw0IAxIZMriMdzAgjcxXTyvYzt4OP2CDKiCwCoGEEJ8HspBCPtXCfIyDejeOawaizDvQLqLwWT5RWg2uy7SEogZpJlfubx3tXW6Z81hoMHFhK9vTPb8BOYpmEHYLHe1ZkW89Zm21A9DKe0yFQo0mIbSvX8OQaluUlyMIKeL3kR2kkdHAbdjWJ7Vlfop5un5SJJZwLeLKlJsTobJJ2uO1FD43G0kLsIz_ITzxVVZ3IN7g0txm2ofyzh3RU2GLTYbWVM_QA0ryA
1.7 使用kubeconfig授权登录
root@master231:~/manifests/add-ones/dashboard# cat ysl-generate-tontext-conf.sh
#!/bin/bash
# 获取secret的名称
SECRET_NAME=`kubectl get sa ysl -o jsonpath='{.secrets[0].name}'`
# 指定API SERVER的地址
API_SERVER=10.0.0.231:6443
# 指定kubeconfig配置文件的路径名称
KUBECONFIG_NAME=./yangsenlin-k8s-dashboard-admin.conf
# 获取ysl用户的token
ysl_TOKEN=`kubectl get secrets $SECRET_NAME -o jsonpath={.data.token} | base64 -d`
# 在kubeconfig配置文件中设置集群项
kubectl config set-cluster yangsenlin-k8s-dashboard-cluster --server=$API_SERVER --kubeconfig=$KUBECONFIG_NAME
# 在kubeconfig中设置用户项
kubectl config set-credentials yangsenlin-k8s-dashboard-user --token=$ysl_TOKEN --kubeconfig=$KUBECONFIG_NAME
# 配置上下文,即绑定用户和集群的上下文关系(注意--user要与上面set-credentials的名称保持一致),可以将多个集群和用户进行绑定
kubectl config set-context yangsenlin-admin --cluster=yangsenlin-k8s-dashboard-cluster --user=yangsenlin-k8s-dashboard-user --kubeconfig=$KUBECONFIG_NAME
# 配置当前使用的上下文
kubectl config use-context yangsenlin-admin --kubeconfig=$KUBECONFIG_NAME
root@master231:~/manifests/add-ones/dashboard# bash ysl-generate-tontext-conf.sh
Cluster “yangsenlin-k8s-dashboard-cluster” set.
User “yangsenlin-k8s-dashboard-user” set.
Context “yangsenlin-admin” created.
Switched to context “yangsenlin-admin”.
root@master231:~/manifests/add-ones/dashboard# ll
total 28
drwxr-xr-x 3 root root 4096 May 2 19:24 ./
drwxr-xr-x 5 root root 4096 May 2 11:16 ../
drwxr-xr-x 2 root root 4096 May 2 11:27 images/
-rw-r–r– 1 root root 7642 May 2 11:24 recommended.yaml
-rw——- 1 root root 1248 May 2 19:24 yangsenlin-k8s-dashboard-admin.conf
-rw-r–r– 1 root root 1063 May 2 19:23 ysl-generate-tontext-conf.sh
root@master231:~/manifests/add-ones/dashboard# cat yangsenlin-k8s-dashboard-admin.conf
apiVersion: v1
clusters:
– cluster:
server: 10.0.0.231:6443
name: yangsenlin-k8s-dashboard-cluster
contexts:
– context:
cluster: yangsenlin-k8s-dashboard-cluster
user: yangsenlin-k8s-dashboard-user
name: yangsenlin-admin
current-context: yangsenlin-admin
kind: Config
preferences: {}
users:
– name: yangsenlin-k8s-dashboard-user
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InhRdmpiTWw2WTlrVG04SjlGOTI0a0hrMXVjZUhWUmV6bDc0LVlhZll1MUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InlzbC10b2tlbi1ucmQ5MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJ5c2wiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0OTlhMmRjMS0wNTA5LTRkY2ItYjIwNy03MGM2OWNhMTBmYmMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp5c2wifQ.1WWAYf9i9ChqYGfbmoJSyqImw5D7hrUZYbznqWk8ExTJFoTiX5_X7ktbZoVxlRwp7oKvepiYn45I8Gw0IAxIZMriMdzAgjcxXTyvYzt4OP2CDKiCwCoGEEJ8HspBCPtXCfIyDejeOawaizDvQLqLwWT5RWg2uy7SEogZpJlfubx3tXW6Z81hoMHFhK9vTPb8BOYpmEHYLHe1ZkW89Zm21A9DKe0yFQo0mIbSvX8OQaluUlyMIKeL3kR2kkdHAbdjWJ7Vlfop5un5SJJZwLeLKlJsTobJJ2uO1FD43G0kLsIz_ITzxVVZ3IN7g0txm2ofyzh3RU2GLTYbWVM_QA0ryA
root@master231:~/manifests/add-ones/dashboard#
2 kuboard
官网地址:
https://kuboard.cn/
2.1 部署kuboard
root@master231:~/manifests/add-ones/dashboard# wget https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml
root@master231:~/manifests/add-ones/kuboard# kubectl apply -f kuboard-v3-swr.yaml
namespace/kuboard created
configmap/kuboard-v3-config created
serviceaccount/kuboard-boostrap created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-boostrap-crb created
daemonset.apps/kuboard-etcd created
deployment.apps/kuboard-v3 created
service/kuboard-v3 created
root@master231:~/manifests/add-ones/kuboard#
root@master231:~/manifests/add-ones/kuboard# kubectl -n kuboard get pods
root@master231:~/manifests/add-ones/kuboard# kubectl -n kuboard get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kuboard-v3 NodePort 10.200.124.105
在浏览器中打开链接 http://10.0.0.233:30080
输入初始用户名和密码,并登录
用户名: admin
密码: Kuboard123
3 kubesphere
3.1 安装helm
[root@master231 kubesphere]# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
3.2 配置helm的自动补全功能
[root@master231 kubesphere]# helm completion bash > /etc/bash_completion.d/helm
[root@master231 kubesphere]# source /etc/bash_completion.d/helm
[root@master231 kubesphere]# echo 'source /etc/bash_completion.d/helm' >> ~/.bashrc
3.3 在集群节点,执行以下命令安装 KubeSphere Core。
[root@master231 ~]# helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.com.cn/main/ks-core-1.1.3.tgz --debug --wait
…
NOTES:
Thank you for choosing KubeSphere Helm Chart.
Please be patient and wait for several seconds for the KubeSphere deployment to complete.
1. Wait for Deployment Completion
Confirm that all KubeSphere components are running by executing the following command:
kubectl get pods -n kubesphere-system
2. Access the KubeSphere Console
Once the deployment is complete, you can access the KubeSphere console using the following URL:
http://10.0.0.231:30880
3. Login to KubeSphere Console
Use the following credentials to log in:
Account: admin
Password: P@88w0rd
NOTE: It is highly recommended to change the default password immediately after the first login.
For additional information and details, please visit https://kubesphere.io.
温馨提示:
如果安装失败,需要手动导入镜像。
3.4 登录kubesphere的WebUI
http://10.0.0.231:30880
34 wordpress
root@master231:~/manifests/wordpress# cat 01-deploy-mysql.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-db
spec:
storageClassName: nfs-csi
accessModes:
– ReadWriteMany
resources:
limits:
storage: 2Gi
requests:
storage: 1Gi
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-mysql
spec:
replicas: 1
selector:
matchLabels:
apps: db
template:
metadata:
labels:
apps: db
spec:
volumes:
– name: data
persistentVolumeClaim:
claimName: pvc-db
– name: datetime
hostPath:
path: /etc/localtime
restartPolicy: Always
hostNetwork: true
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
– matchExpressions:
– key: kubernetes.io/hostname
values:
– worker232
operator: In
containers:
– name: c1
image: harbor.ysl.com/wordpress/mysql:8.0.36-oracle
volumeMounts:
– name: datetime
mountPath: /etc/localtime
– name: data
mountPath: /var/lib/mysql
imagePullPolicy: IfNotPresent
args:
– –character-set-server=utf8
– –collation-server=utf8_bin
– –default-authentication-plugin=mysql_native_password
ports:
– containerPort: 3306
env:
– name: MYSQL_ALLOW_EMPTY_PASSWORD
value: “yes”
– name: MYSQL_DATABASE
value: ceshi
– name: MYSQL_USER
value: test
– name: MYSQL_PASSWORD
value: “123456”
root@master231:~/manifests/wordpress# cat 02-deploy-wordpress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-wordpress
spec:
replicas: 1
selector:
matchLabels:
apps: wp
template:
metadata:
labels:
apps: wp
spec:
hostNetwork: true
restartPolicy: Always
nodeSelector:
kubernetes.io/hostname: worker233
containers:
– name: c1
image: harbor.ysl.com/wordpress/wordpress:6.7.1-php8.1-apache
ports:
– containerPort: 80
env:
– name: WORDPRESS_DB_HOST
value: 10.0.0.232
– name: WORDPRESS_DB_NAME
value: ceshi
– name: WORDPRESS_DB_USER
value: test
– name: WORDPRESS_DB_PASSWORD
value: “123456”
volumeMounts:
– name: datetime
mountPath: /etc/localtime
– name: data
mountPath: /var/www/html
volumes:
– name: data
persistentVolumeClaim:
claimName: pvc-wp
– name: datetime
hostPath:
path: /etc/localtime
—
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-wp
spec:
storageClassName: nfs-csi
accessModes:
– ReadWriteMany
resources:
limits:
storage: 500Mi
requests:
storage: 200Mi
– WordPress基于svc
[root@master231 v2]# cat 01-deploy-mysql.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-db
namespace: kube-public
spec:
storageClassName: nfs-csi
accessModes:
– ReadWriteMany
resources:
limits:
storage: 2Gi
requests:
storage: 1Gi
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-mysql
namespace: kube-public
spec:
replicas: 1
selector:
matchLabels:
apps: db
template:
metadata:
labels:
apps: db
spec:
volumes:
– name: data
persistentVolumeClaim:
claimName: pvc-db
– name: datetime
hostPath:
path: /etc/localtime
restartPolicy: Always
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
– matchExpressions:
– key: kubernetes.io/hostname
values:
– worker232
operator: In
containers:
– name: c1
image: harbor.ysl.com/wordpress/mysql:8.0.36-oracle
volumeMounts:
– name: datetime
mountPath: /etc/localtime
– name: data
mountPath: /var/lib/mysql
imagePullPolicy: IfNotPresent
args:
– –character-set-server=utf8
– –collation-server=utf8_bin
– –default-authentication-plugin=mysql_native_password
ports:
– containerPort: 3306
env:
– name: MYSQL_ALLOW_EMPTY_PASSWORD
value: “yes”
– name: MYSQL_DATABASE
value: ceshi
– name: MYSQL_USER
value: test
– name: MYSQL_PASSWORD
value: “123456”
—
apiVersion: v1
kind: Service
metadata:
name: svc-db
namespace: kube-public
spec:
type: ClusterIP
selector:
apps: db
ports:
– port: 3306
[root@master231 v2]#
[root@master231 v2]#
[root@master231 v2]#
[root@master231 v2]# cat 02-deploy-wordpress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-wordpress
spec:
replicas: 1
selector:
matchLabels:
apps: wp
template:
metadata:
labels:
apps: wp
spec:
restartPolicy: Always
containers:
– name: c1
image: harbor.ysl.com/wordpress/wordpress:6.7.1-php8.1-apache
ports:
– containerPort: 80
env:
– name: WORDPRESS_DB_HOST
# 指定数据库svc的名称,CoreDNS组件会自动解析其对应的ClusterIP
# value: svc-db
# 如果不指定名称空间,则默认会在当前资源所属的名称空间找。
# value: svc-db.kube-public
# 当然,我们也可以配置完整的A记录格式
value: svc-db.kube-public.svc.ysl.com
– name: WORDPRESS_DB_NAME
value: ceshi
– name: WORDPRESS_DB_USER
value: test
– name: WORDPRESS_DB_PASSWORD
value: “123456”
volumeMounts:
– name: datetime
mountPath: /etc/localtime
– name: data
mountPath: /var/www/html
volumes:
– name: data
persistentVolumeClaim:
claimName: pvc-wp
– name: datetime
hostPath:
path: /etc/localtime
—
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-wp
spec:
storageClassName: nfs-csi
accessModes:
– ReadWriteMany
resources:
limits:
storage: 500Mi
requests:
storage: 200Mi
—
apiVersion: v1
kind: Service
metadata:
name: svc-wp
namespace: default
spec:
type: LoadBalancer
selector:
apps: wp
ports:
– port: 80
[root@master231 v2]#
35 pv的回收策略
Retain:
    "保留"回收策略允许手动回收资源。删除pvc时,pv仍然存在,该卷被标记为"已释放(Released)"状态。
    在管理员手动回收该pv之前,它无法被新的pvc绑定使用。
    温馨提示:
        (1)在k8s 1.15.12版本测试时,删除pvc后,nfs存储卷的数据不会被删除,pv也不会被删除;
Delete:
    对于支持删除回收策略的卷插件,k8s会删除pv及其后端存储的数据。建议结合动态存储类(sc)使用,才能看到删除效果。
    对于AWS EBS、GCE PD、Azure Disk、OpenStack Cinder等存储卷,后端卷会被一并删除。
    温馨提示:
        (1)在k8s 1.15.12版本测试时,不使用sc的情况下,删除pvc后nfs存储卷的数据不会被删除;
        (2)在k8s 1.15.12版本测试时,使用sc后,可以看到数据被删除的效果;
Recycle:
    "回收利用"策略官方已弃用,推荐改用动态供应(sc),动态存储类也已不再支持该类型。
    如果底层卷插件支持,该回收策略会对卷执行基本清理(rm -rf /thevolume/*),并使其可再次被新的pvc使用。
    温馨提示: 在k8s 1.15.12版本测试时,删除pvc后发现nfs存储卷的数据会被删除。
温馨提示:
    动态存储类的配置中多了一个"onDelete"参数,设置为"archive"时表示归档pv对应的后端数据。
[root@master231 csi-driver-nfs]# cat deploy/v4.9.0/storageclass.yaml
—
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
# server: nfs-server.default.svc.cluster.local
server: 10.0.0.231
# share: /
share: /ysl/data/nfs-server/sc
# csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
# csi.storage.k8s.io/provisioner-secret-name: “mount-options”
# csi.storage.k8s.io/provisioner-secret-namespace: “default”
# 有效值为: delete retain archive,若不指定,则默认值为”delete”,当删除pvc时,会自动删除pv及后端的数据。
# onDelete: “archive”
# onDelete: “retain”
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
– nfsvers=4.1
[root@master231 csi-driver-nfs]#
36 名称空间
1 概述
k8s的名称空间是用来隔离K8S集群资源的。
但是有些资源不支持名称空间隔离,我们称之为全局资源,而支持名称空间的我们称之为局部资源。
2 查看资源是否支持名称空间
root@master231:~# kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
NAMESPACED列为true的资源支持名称空间,为false的资源不支持名称空间。
3 查看现有的名称空间
root@master231:~# kubectl get ns
NAME STATUS AGE
default Active 11d
kube-flannel Active 11d
kube-node-lease Active 11d
kube-public Active 11d
kube-system Active 11d
kubesphere-system Active 89m
4 查看特定名称空间的资源
root@master231:~# kubectl get cm
NAME DATA AGE
game-demo 7 3d11h
kube-root-ca.crt 1 11d
nginx-subfile 2 2d11h
test-xy 4 2d11h
ysl-cm 2 2d3h
root@master231:~# kubectl get cm –namespace default
root@master231:~# kubectl get cm -n default
root@master231:~# kubectl get cm -n kube-system
root@master231:~# kubectl get cm –namespace kube-system
root@master231:~# kubectl -n kube-system get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
etcd-master231 1/1 Running 16 (3h57m ago) 11d 10.0.0.231 master231
kube-apiserver-master231 1/1 Running 17 (3h57m ago) 11d 10.0.0.231 master231
kube-controller-manager-master231 1/1 Running 16 (3h57m ago) 11d 10.0.0.231 master231
kube-proxy-9zlc6 1/1 Running 16 (3h57m ago) 11d 10.0.0.231 master231
kube-proxy-hn6g6 1/1 Running 23 (124m ago) 11d 10.0.0.233 worker233
kube-proxy-ngs5g 1/1 Running 8 (3h57m ago) 6d23h 10.0.0.232 worker232
kube-scheduler-master231 1/1 Running 16 (3h57m ago) 11d 10.0.0.231 master231
5 查看所有名称空间的资源
root@master231:~# kubectl get pods –all-namespaces
root@master231:~# kubectl get pods -A
root@master231:~# kubectl get cm -A
6 响应式创建名称空间
root@master231:~# kubectl create namespace ysl
namespace/ysl created
root@master231:~# kubectl get ns|grep ysl
ysl Active 12s
7 声明式创建名称空间
root@master231:~/manifests# cat 01-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test-ns
root@master231:~/manifests# kubectl apply -f 01-ns.yaml
namespace/test-ns created
root@master231:~/manifests# kubectl get -f 01-ns.yaml
NAME STATUS AGE
test-ns Active 8s
root@master231:~/manifests#
8 删除名称空间
root@master231:~/manifests# kubectl delete ns ysl test-ns
namespace “ysl” deleted
namespace “test-ns” deleted
9 提示
当删除名称空间时,该名称空间下的所有资源都会被删除!生产环境中谨慎操作!
10 将pod放在特定的名称空间
root@master231:~/manifests/pods# cat 23-pods-ns.yaml
apiVersion: v1
kind: Pod
metadata:
name: ns-xiuxian
#资源放在特定的名称空间
namespace: kube-public
spec:
containers:
– name: c1
image: harbor.ysl.com/ysl-xiuxian/apps:v1
root@master231:~/manifests/pods# kubectl apply -f 23-pods-ns.yaml
pod/ns-xiuxian configured
root@master231:~/manifests/pods# kubectl -n kube-public get pods
NAME READY STATUS RESTARTS AGE
ns-xiuxian 1/1 Running 0 79s
11 将cm放在特定的名称空间
root@master231:~/manifests/configmaps# cat 02-cm-ns.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: game-demo
namespace: kube-public
data:
player_initial_lives: “3”
ui_properties_file_name: “user-interface.properties”
xingming: zhangsan
city: Sichuan
game.properties: |
enemy.types=aliens,monsters
player.maximum-lives=5
user-interface.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
my.cnf: |
datadir=/ysl/data/mysql80
basedir=/ysl/softwares/mysql80
port=3306
socket=/tmp/mysql80.sock
root@master231:~/manifests/configmaps# kubectl apply -f 02-cm-ns.yaml
configmap/game-demo created
root@master231:~/manifests/configmaps# kubectl -n kube-public get cm
NAME DATA AGE
cluster-info 2 11d
game-demo 7 17s
kube-root-ca.crt 1 11d
12 有些资源不支持名称空间
比如pv,sc等都是不支持名称空间的,因此就算在资源清单中添加namespace字段也会被无视。
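例如,可以通过--namespaced参数快速筛选全局资源和局部资源(示例):
root@master231:~# kubectl api-resources --namespaced=false | head
root@master231:~# kubectl api-resources --namespaced=true | grep -wi configmaps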
37 服务发现 Service
1 概述
服务发现包含以下四种
1.1 ClusterIP:
一般用于K8S集群内部应用服务之间的访问。
1.2 NodePort:
在ClusterIP的基础之上,通过每个节点的端口对集群外部提供访问入口。
1.3 LoadBalancer:
一般用于云环境的K8S。
1.4 ExternalName:
一般映射K8S集群外部的服务到K8S集群内部,底层采用了CNAME技术。
1 ClusterIP
root@master231:~/manifests/services# cat 01-svc-clusterIP.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian
namespace: default
spec:
# 指定svc的类型,有效值: ExternalName, ClusterIP, NodePort, and LoadBalancer
# 若不指定,则默认值为:”ClusterIP”
type: ClusterIP
selector:
apps: v1
ports:
– name: xiuxian
port: 81
protocol: TCP
targetPort: 80
root@master231:~/manifests/services# kubectl apply -f 01-svc-clusterIP.yaml
service/svc-xiuxian configured
root@master231:~/manifests/services# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.200.0.1
svc-xiuxian ClusterIP 10.200.224.60
2 NodePort
root@master231:~/manifests/services# cat 02-svc-nodePort.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-nodeport
namespace: default
spec:
# 指定svc的类型,有效值: ExternalName, ClusterIP, NodePort, and LoadBalancer
# 若不指定,则默认值为:”ClusterIP”
type: NodePort
# 指定svc的网段即可,我们集群安装时指定的网段: 10.200.0.0/16
clusterIP: 10.200.0.100
selector:
apps: v1
ports:
– name: xiuxian
port: 81
protocol: TCP
targetPort: 80
# 默认的端口范围为: 30000-32767
nodePort: 30080
root@master231:~/manifests/services# kubectl apply -f 02-svc-nodePort.yaml
service/svc-xiuxian-nodeport created
root@master231:~/manifests/services# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.200.0.1
svc-xiuxian ClusterIP 10.200.224.60
svc-xiuxian-nodeport NodePort 10.200.0.100
root@master231:~/manifests/services#
root@master231:~/manifests/services# kubectl get svc svc-xiuxian -o yaml
root@master231:~/manifests/services# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.200.0.1
svc-xiuxian ClusterIP 10.200.224.60
svc-xiuxian-nodeport NodePort 10.200.0.100
集群外部访问三个节点的30080端口都可以
集群内部访问CLUSTER-IP的81端口即可
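访问方式示例(假设后端Pod运行正常且标签匹配apps=v1):
    # 集群外部: 任意节点IP + NodePort
    curl http://10.0.0.233:30080/
    # 集群内部: ClusterIP + svc端口
    root@master231:~# curl http://10.200.0.100:81/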
3 LoadBalancer
3.1 Metallb
如果我们需要在自己的Kubernetes中暴露LoadBalancer的应用,那么Metallb是一个不错的解决方案。
Metallb官网地址:
https://metallb.universe.tf/installation
3.2 修改cm配置
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
sed -e 's#mode: ""#mode: "ipvs"#' | \
kubectl apply -f - -n kube-system
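可以先确认配置是否写入成功(示例):
root@master231:~# kubectl -n kube-system get cm kube-proxy -o yaml | grep -E 'strictARP|mode:'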
3.3 删除Kube-proxy组件的pod,让cm的配置生效
root@master231:~# kubectl -n kube-system delete pods --all
root@master231:~# kubectl -n kube-system get pods
3.4 部署MetalLb
root@master231:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml
3.5 查看部署是否正常
root@master231:~# kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-7f77587954-gz425 1/1 Running 0 88s
speaker-f7bwt 1/1 Running 0 88s
speaker-p7z98 1/1 Running 0 88s
speaker-sp7fj 1/1 Running 0 88s
3.6 创建地址池
root@master231:~/manifests/metalLB# kubectl apply -f metallb-ip-pool.yaml
ipaddresspool.metallb.io/test-metallb created
l2advertisement.metallb.io/test created
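metallb-ip-pool.yaml的参考写法如下(地址池范围10.0.0.150-10.0.0.180仅为示例,需替换为本环境中未被占用的网段):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: test-metallb
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.150-10.0.0.180
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: test
  namespace: metallb-system
spec:
  ipAddressPools:
  - test-metallb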
3.7 创建svc
root@master231:~/manifests/services# cat 03-svc-LoadBalancer.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-loadbalancer
namespace: default
spec:
# 指定svc的类型,有效值: ExternalName, ClusterIP, NodePort, and LoadBalancer
# 若不指定,则默认值为:”ClusterIP”
type: LoadBalancer
clusterIP: 10.200.0.200
selector:
apps: v1
ports:
– name: xiuxian
port: 81
protocol: TCP
targetPort: 80
nodePort: 30081
root@master231:~/manifests/services# kubectl apply -f 03-svc-LoadBalancer.yaml
service/svc-xiuxian-loadbalancer created
root@master231:~/manifests/services# kubectl get svc | grep load
svc-xiuxian-loadbalancer LoadBalancer 10.200.0.200 10.0.0.150 81:30081/TCP 31s
3.8 访问测试
http://10.0.0.150:81/
http://10.0.0.233:30081/
4 ExternalName
4.1 部署应用
root@master231:~/manifests/services# cat 04-svc-ExteralName.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-externalname
namespace: default
spec:
type: ExternalName
# 配置CNAME
externalName: www.baidu.com
ports:
– name: xiuxian
port: 81
protocol: TCP
targetPort: 80
nodePort: 30082
root@master231:~/manifests/services# kubectl apply -f 04-svc-ExteralName.yaml
service/svc-xiuxian-externalname created
4.2 解析测试
[root@master231 services]# dig @10.200.0.10 svc-xiuxian-externalname.default.svc.baidu.com +short
4.3 访问A记录格式
访问A记录格式:
举例说明:
svc-xiuxian-externalname.default.svc.baidu.com
4.4 ExternalName是不分配ClusterIP的
root@master231:~/manifests/services# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.200.0.1
svc-xiuxian ClusterIP 10.200.224.60
svc-xiuxian-externalname ExternalName
svc-xiuxian-loadbalancer LoadBalancer 10.200.0.200 10.0.0.150 81:30081/TCP 147m
svc-xiuxian-nodeport NodePort 10.200.0.100
5 svc和CoreDNS的关系
5.1 CoreDNS为svc提供解析功能【基于svc的名称解析到对应的ClusterIP或者后端的真实IP地址】
5.2 查看DNS的服务配置
root@master231:~/manifests/services# kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
5.3 阅读CoreDNS的资源清单
root@master231:~# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
—
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
rules:
– apiGroups:
– “”
resources:
– endpoints
– services
– pods
– namespaces
verbs:
– list
– watch
– apiGroups:
– discovery.k8s.io
resources:
– endpointslices
verbs:
– list
– watch
—
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: “true”
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
– kind: ServiceAccount
name: coredns
namespace: kube-system
—
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: “CoreDNS”
app.kubernetes.io/name: coredns
spec:
# replicas: not specified here:
# 1. Default is 1.
# 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
app.kubernetes.io/name: coredns
template:
metadata:
labels:
k8s-app: kube-dns
app.kubernetes.io/name: coredns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
– key: “CriticalAddonsOnly”
operator: “Exists”
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
– labelSelector:
matchExpressions:
– key: k8s-app
operator: In
values: [“kube-dns”]
topologyKey: kubernetes.io/hostname
containers:
– name: coredns
image: registry.aliyuncs.com/google_containers/coredns:v1.8.6
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ “-conf”, “/etc/coredns/Corefile” ]
volumeMounts:
– name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
– containerPort: 53
name: dns
protocol: UDP
– containerPort: 53
name: dns-tcp
protocol: TCP
– containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
– NET_BIND_SERVICE
drop:
– all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
– name: config-volume
configMap:
name: coredns
items:
– key: Corefile
path: Corefile
—
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: “9153”
prometheus.io/scrape: “true”
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: “true”
kubernetes.io/name: “CoreDNS”
app.kubernetes.io/name: coredns
spec:
selector:
k8s-app: kube-dns
app.kubernetes.io/name: coredns
clusterIP: 10.200.0.10
ports:
– name: dns
port: 53
protocol: UDP
targetPort: 53
– name: dns-tcp
port: 53
protocol: TCP
targetPort: 53
– name: metrics
port: 9153
protocol: TCP
targetPort: 9153
root@master231:~#
root@master231:~# kubectl -n kube-system get cm coredns -o yaml
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{“apiVersion”:”v1″,”data”:{“Corefile”:”.:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . /etc/resolv.conf {\n max_concurrent 1000\n }\n cache 30\n loop\n reload\n loadbalance\n}\n”},”kind”:”ConfigMap”,”metadata”:{“annotations”:{},”name”:”coredns”,”namespace”:”kube-system”}}
creationTimestamp: “2025-04-04T11:30:50Z”
name: coredns
namespace: kube-system
resourceVersion: “619273”
uid: 38fb3251-ee68-4a93-861d-886f0c139671
5.4 查看kubelet组件
root@master231:~# cat /var/lib/kubelet/config.yaml
5.5 dnsconfig自定义DNS配置信息
root@master231:~/manifests/coreDNS# cat 01-deploy-dnsConfig.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-dnsconfig
spec:
replicas: 1
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
address: chaoyang
city: Sichuan
spec:
# 为Pod配置DNS相关信息
dnsConfig:
# 指定DNS服务器
nameservers:
– 223.5.5.5
– 223.6.6.6
# 定义搜索域
searches:
– yangsenlin.top
– baidu.com
# 配置DNS解析选项
options:
– name: edns0
value: trust-ad
– name: ndots
value: “5”
containers:
– name: c1
image: harbor.ysl.com/ysl-xiuxian/apps:v1
root@master231:~/manifests/coreDNS# kubectl apply -f 01-deploy-dnsConfig.yaml
deployment.apps/deploy-dnsconfig created
root@master231:~/manifests/coreDNS# kubectl exec -it deploy-dnsconfig-855c67c5b7-pf4kd -- sh
/ # cat /etc/resolv.conf
nameserver 10.200.0.10
nameserver 223.5.5.5
nameserver 223.6.6.6
search default.svc.yangsenlin.top svc.yangsenlin.top yangsenlin.top baidu.com
options ndots:5 edns0:trust-ad
5.6 dnspolicy和hostNetwork
root@master231:~/manifests/coreDNS# cat 02-deploy-hostNetwork-dnsPolicy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-hostnetwork
spec:
replicas: 1
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
address: shahe
city: Sichuan
spec:
hostNetwork: true
containers:
– name: c1
image: harbor.ysl.com/ysl-xiuxian/apps:v1
root@master231:~/manifests/coreDNS# kubectl apply -f 02-deploy-hostNetwork-dnsPolicy.yaml
deployment.apps/deploy-hostnetwork created
root@master231:~/manifests/coreDNS# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-dnsconfig-855c67c5b7-pf4kd 1/1 Running 0 9m4s
deploy-hostnetwork-85ff5498ff-7k7qm 1/1 Running 0 65s
root@master231:~/manifests/coreDNS# kubectl exec -it deploy-hostnetwork-85ff5498ff-7k7qm -- sh
/ # cat /etc/resolv.conf
nameserver 223.5.5.5
nameserver 223.6.6.6
search
/ # ping svc-xiuxian-nodeport
ping: bad address ‘svc-xiuxian-nodeport’
root@master231:~/manifests/coreDNS# kubectl delete -f 02-deploy-hostNetwork-dnsPolicy.yaml
deployment.apps “deploy-hostnetwork” deleted
root@master231:~/manifests/coreDNS# vi 02-deploy-hostNetwork-dnsPolicy.yaml
root@master231:~/manifests/coreDNS# cat 02-deploy-hostNetwork-dnsPolicy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-hostnetwork
spec:
replicas: 1
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
address: shahe
city: Sichuan
spec:
hostNetwork: true
# 配置DNS策略,有效值: ‘ClusterFirstWithHostNet’, ‘ClusterFirst’, ‘Default’ or ‘None’
# 默认值为:”ClusterFirst”
dnsPolicy: ClusterFirstWithHostNet
containers:
– name: c1
image: harbor.ysl.com/ysl-xiuxian/apps:v1
root@master231:~/manifests/coreDNS# kubectl apply -f 02-deploy-hostNetwork-dnsPolicy.yaml
deployment.apps/deploy-hostnetwork created
root@master231:~/manifests/coreDNS# kubectl exec -it deploy-hostnetwork-6f64688bbb-5n2n7 -- sh
/ # cat /etc/resolv.conf
nameserver 10.200.0.10
search default.svc.yangsenlin.top svc.yangsenlin.top yangsenlin.top
options ndots:5
/ # ping svc-xiuxian-nodeport
6 无头服务
6.1 headless Service
本质上还是一个Service,只不过该Service没有ClusterIP而已。
一般会结合sts资源使用。
6.2 资源清单
[root@master231 services]# cat 05-svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-headless
namespace: default
spec:
type: ClusterIP
# 若没有IP地址,则表示的是一个无头服务
clusterIP: None
selector:
apps: xiuxian
ports:
– name: xiuxian
port: 81
protocol: TCP
targetPort: 80
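验证思路示例(假设harbor中的xiuxian镜像内置busybox的nslookup命令): 无头服务的名称会直接解析为各后端Pod的IP,而不是ClusterIP。
root@master231:~/manifests/services# kubectl run dns-test --rm -it --restart=Never --image=harbor.ysl.com/ysl-xiuxian/apps:v1 -- nslookup svc-headless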
7 endpoints映射K8S集群外部服务
7.1 svc和ep的关系
除了ExternalName类型外,当创建其他类型的svc时,都会自动关联一个同名称的ep资源。
当删除svc时,会自动删除同名称的ep资源。
7.2 svc会自动关联ep
root@master231:~/manifests/services# kubectl apply -f 05-svc-headless.yaml
service/svc-headless created
root@master231:~/manifests/services# kubectl describe svc svc-headless
Name: svc-headless
Namespace: default
Labels:
Annotations:
Selector: apps=xiuxian
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: None
IPs: None
Port: xiuxian 81/TCP
TargetPort: 80/TCP
Endpoints: 10.0.0.232:80,10.100.2.15:80
Session Affinity: None
Events:
root@master231:~/manifests/services#
root@master231:~/manifests/services# kubectl get ep svc-headless
NAME ENDPOINTS AGE
svc-headless 10.0.0.232:80,10.100.2.15:80 64s
7.3 将k8s外部的服务映射到k8s集群内部
root@worker233:~# docker run -d -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=test -e MYSQL_USER=test -e MYSQL_PASSWORD="123456" --name mysql-server --network host harbor.ysl.com/wordpress/mysql:8.0.36-oracle
b2975312782cd0d689ec49cb38ba688674fb3f785ae68815619ed126f66addab
root@worker233:~# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b2975312782c harbor.ysl.com/wordpress/mysql:8.0.36-oracle “docker-entrypoint.s…” 22 seconds ago Up 20 seconds mysql-server
root@worker233:~# ss -ntl | grep 3306
LISTEN 0 70 *:33060 *:*
LISTEN 0 151 *:3306 *:*
7.4 k8s集群添加ep资源映射外部服务
root@master231:~/manifests/endpoints# cat 01-ep-mysql.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: harbor-mysql
subsets:
– addresses:
– ip: 10.0.0.250
ports:
– port: 3306
—
apiVersion: v1
kind: Service
metadata:
name: harbor-mysql
spec:
type: ClusterIP
ports:
– port: 3306
root@master231:~/manifests/endpoints# kubectl apply -f 01-ep-mysql.yaml
endpoints/harbor-mysql created
service/harbor-mysql created
[root@master231 endpoints]# kubectl describe svc harbor-mysql | grep Endpoints
Endpoints: 10.0.0.250:3306
[root@master231 endpoints]#
7.5 改写wordpress案例
[root@master231 v3]# cat deploy-wordpress.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: harbor-mysql
subsets:
– addresses:
– ip: 10.0.0.250
ports:
– port: 3306
—
apiVersion: v1
kind: Service
metadata:
name: harbor-mysql
spec:
type: ClusterIP
ports:
– port: 3306
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-wordpress
spec:
replicas: 1
selector:
matchLabels:
apps: wp
template:
metadata:
labels:
apps: wp
spec:
restartPolicy: Always
containers:
– name: c1
image: harbor.ysl.com/wordpress/wordpress:6.7.1-php8.1-apache
ports:
– containerPort: 80
env:
– name: WORDPRESS_DB_HOST
value: harbor-mysql
– name: WORDPRESS_DB_NAME
value: ceshi
– name: WORDPRESS_DB_USER
value: test
– name: WORDPRESS_DB_PASSWORD
value: “123456”
volumeMounts:
– name: datetime
mountPath: /etc/localtime
– name: data
mountPath: /var/www/html
volumes:
– name: data
persistentVolumeClaim:
claimName: pvc-wp
– name: datetime
hostPath:
path: /etc/localtime
—
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-wp
spec:
storageClassName: nfs-csi
accessModes:
– ReadWriteMany
resources:
limits:
storage: 500Mi
requests:
storage: 200Mi
—
apiVersion: v1
kind: Service
metadata:
name: svc-wp
namespace: default
spec:
type: LoadBalancer
selector:
apps: wp
ports:
– port: 80
[root@master231 v3]#
38 kube-proxy
- kube-proxy的底层工作方式
  - iptables:
    基于线性规则匹配,随着规则的增多,匹配效率会降低。
  - ipvs (推荐):
    基于内核ipvs(哈希表)查找规则,时间复杂度更低,性能更高,生产环境推荐使用。
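温馨提示:
    可以通过下面的方式确认kube-proxy当前生效的代理模式(示例,10249为kube-proxy默认的metrics端口;ipvsadm需要提前在节点安装):
    root@master231:~# curl -s 127.0.0.1:10249/proxyMode
    root@master231:~# ipvsadm -Ln | head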
root@master231:~/manifests/coreDNS# kubectl -n kube-system get ds kube-proxy -o yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
deprecated.daemonset.template.generation: “1”
creationTimestamp: “2025-04-04T11:30:50Z”
generation: 1
labels:
k8s-app: kube-proxy
name: kube-proxy
namespace: kube-system
resourceVersion: “605353”
uid: e22a01ce-6250-4c2f-ab4f-b456c638d9af
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kube-proxy
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-proxy
spec:
containers:
– command:
– /usr/local/bin/kube-proxy
– –config=/var/lib/kube-proxy/config.conf
– –hostname-override=$(NODE_NAME)
env:
– name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: registry.aliyuncs.com/google_containers/kube-proxy:v1.23.17
imagePullPolicy: IfNotPresent
name: kube-proxy
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
– mountPath: /var/lib/kube-proxy
name: kube-proxy
– mountPath: /run/xtables.lock
name: xtables-lock
– mountPath: /lib/modules
name: lib-modules
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-node-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: kube-proxy
serviceAccountName: kube-proxy
terminationGracePeriodSeconds: 30
tolerations:
– operator: Exists
volumes:
– configMap:
defaultMode: 420
name: kube-proxy
name: kube-proxy
– hostPath:
path: /run/xtables.lock
type: FileOrCreate
name: xtables-lock
– hostPath:
path: /lib/modules
type: “”
name: lib-modules
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
status:
currentNumberScheduled: 3
desiredNumberScheduled: 3
numberAvailable: 3
numberMisscheduled: 0
numberReady: 3
observedGeneration: 1
updatedNumberScheduled: 3
root@master231:~/manifests/coreDNS#
root@master231:~/manifests/coreDNS# kubectl -n kube-system get cm kube-proxy -o yaml
apiVersion: v1
data:
config.conf: |-
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
acceptContentTypes: “”
burst: 0
contentType: “”
kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
qps: 0
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 0s
conntrack:
maxPerCore: null
min: null
tcpCloseWaitTimeout: null
tcpEstablishedTimeout: null
detectLocalMode: “”
enableProfiling: false
healthzBindAddress: “”
hostnameOverride: “”
iptables:
masqueradeAll: false
masqueradeBit: null
minSyncPeriod: 0s
syncPeriod: 0s
ipvs:
excludeCIDRs: null
minSyncPeriod: 0s
scheduler: “”
strictARP: true
syncPeriod: 0s
tcpFinTimeout: 0s
tcpTimeout: 0s
udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: “”
mode: “ipvs”
nodePortAddresses: null
oomScoreAdj: null
portRange: “”
showHiddenMetricsForVersion: “”
udpIdleTimeout: 0s
winkernel:
enableDSR: false
networkName: “”
sourceVip: “”
kubeconfig.conf: |-
apiVersion: v1
kind: Config
clusters:
– cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://10.0.0.231:6443
name: default
contexts:
– context:
cluster: default
namespace: default
user: default
name: default
current-context: default
users:
– name: default
user:
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
annotations:
kubeadm.kubernetes.io/component-config.hash: sha256:aeeb159f672e219d96aa2cc4017cc8f204922445a7014c4c3e9118fe44b87345
kubectl.kubernetes.io/last-applied-configuration: |
{“apiVersion”:”v1″,”data”:{“config.conf”:”apiVersion: kubeproxy.config.k8s.io/v1alpha1\nbindAddress: 0.0.0.0\nbindAddressHardFail: false\nclientConnection:\n acceptContentTypes: \”\”\n burst: 0\n contentType: \”\”\n kubeconfig: /var/lib/kube-proxy/kubeconfig.conf\n qps: 0\nclusterCIDR: 10.100.0.0/16\nconfigSyncPeriod: 0s\nconntrack:\n maxPerCore: null\n min: null\n tcpCloseWaitTimeout: null\n tcpEstablishedTimeout: null\ndetectLocalMode: \”\”\nenableProfiling: false\nhealthzBindAddress: \”\”\nhostnameOverride: \”\”\niptables:\n masqueradeAll: false\n masqueradeBit: null\n minSyncPeriod: 0s\n syncPeriod: 0s\nipvs:\n excludeCIDRs: null\n minSyncPeriod: 0s\n scheduler: \”\”\n strictARP: true\n syncPeriod: 0s\n tcpFinTimeout: 0s\n tcpTimeout: 0s\n udpTimeout: 0s\nkind: KubeProxyConfiguration\nmetricsBindAddress: \”\”\nmode: \”ipvs\”\nnodePortAddresses: null\noomScoreAdj: null\nportRange: \”\”\nshowHiddenMetricsForVersion: \”\”\nudpIdleTimeout: 0s\nwinkernel:\n enableDSR: false\n networkName: \”\”\n sourceVip: \”\””,”kubeconfig.conf”:”apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n server: https://10.0.0.231:6443\n name: default\ncontexts:\n- context:\n cluster: default\n namespace: default\n user: default\n name: default\ncurrent-context: default\nusers:\n- name: default\n user:\n tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token”},”kind”:”ConfigMap”,”metadata”:{“annotations”:{“kubeadm.kubernetes.io/component-config.hash”:”sha256:aeeb159f672e219d96aa2cc4017cc8f204922445a7014c4c3e9118fe44b87345″},”creationTimestamp”:”2025-04-04T11:30:50Z”,”labels”:{“app”:”kube-proxy”},”name”:”kube-proxy”,”namespace”:”kube-system”,”resourceVersion”:”238″,”uid”:”0b0fc09d-780d-4d1e-9737-0a09a020c5e8″}}
creationTimestamp: “2025-04-04T11:30:50Z”
labels:
app: kube-proxy
name: kube-proxy
namespace: kube-system
resourceVersion: “605097”
uid: 0b0fc09d-780d-4d1e-9737-0a09a020c5e8
root@master231:~/manifests/coreDNS#
39 sts有状态服务
1.sts控制器
K8S 1.9+引入,主要作用就是部署有状态服务,自带特性:
– 1.每个Pod有独立的存储;
– 2.每个Pod有唯一的网络标识;
– 3.有顺序的启动和停止Pod;
所谓的有状态服务,指的是各实例在启动时逻辑并不相同,比如部署MySQL主从时,第一个启动的Pod(主库)和第二个启动的Pod(从库)的初始化逻辑并不相同。
2 sts部署mysql主从
root@master231:~/manifests/mysql-master-slave# cat 01-cm-mysql.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql80
data:
master.cnf: |
[mysqld]
server_id=111
skip-host-cache
skip-name-resolve
log-bin
character-set-server=utf8mb4
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql
pid-file=/var/run/mysqld/mysqld.pid
[client]
default-character-set=utf8mb4
socket=/var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
slave.cnf: |
[mysqld]
server_id=222
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql
super-read-only
character-set-server=utf8mb4
pid-file=/var/run/mysqld/mysqld.pid
[client]
default-character-set=utf8mb4
socket=/var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
root@master231:~/manifests/mysql-master-slave# cat 02-pvc-mysql.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-master
spec:
storageClassName: nfs-csi
accessModes:
– ReadWriteMany
resources:
limits:
storage: 2Gi
requests:
storage: 1Gi
—
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-slave
spec:
storageClassName: nfs-csi
accessModes:
– ReadWriteMany
resources:
limits:
storage: 2Gi
requests:
storage: 1Gi
root@master231:~/manifests/mysql-master-slave# cat 03-svc-mysql-master.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-master
spec:
type: ClusterIP
selector:
apps: master
ports:
– port: 3306
root@master231:~/manifests/mysql-master-slave# cat 04-deploy-mysql-master.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-mysql-master
spec:
replicas: 1
selector:
matchLabels:
apps: master
template:
spec:
volumes:
– name: dt
hostPath:
path: /etc/localtime
– name: master
persistentVolumeClaim:
claimName: pvc-master
– name: conf
configMap:
name: mysql80
items:
– key: master.cnf
path: my.cnf
containers:
– name: db
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/wordpress:mysql-8.0.36-oracle
env:
– name: MYSQL_ALLOW_EMPTY_PASSWORD
value: “yes”
– name: MYSQL_USER
value: test
– name: MYSQL_PASSWORD
value: yangsenlin
volumeMounts:
– name: master
mountPath: /var/lib/mysql
– name: conf
mountPath: /etc/my.cnf
subPath: my.cnf
– name: dt
mountPath: /etc/localtime
args:
– –character-set-server=utf8mb4
– –collation-server=utf8mb4_bin
– –default-authentication-plugin=mysql_native_password
metadata:
labels:
apps: master
root@master231:~/manifests/mysql-master-slave# cat 05-deploy-mysql-slave.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-mysql-slave
spec:
replicas: 1
selector:
matchLabels:
apps: slave
template:
spec:
volumes:
– name: dt
hostPath:
path: /etc/localtime
– name: slave
persistentVolumeClaim:
claimName: pvc-slave
– name: conf
configMap:
name: mysql80
items:
– key: slave.cnf
path: my.cnf
containers:
– name: db
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/wordpress:mysql-8.0.36-oracle
env:
– name: MYSQL_ALLOW_EMPTY_PASSWORD
value: “yes”
volumeMounts:
– name: slave
mountPath: /var/lib/mysql
– name: conf
mountPath: /etc/my.cnf
subPath: my.cnf
– name: dt
mountPath: /etc/localtime
args:
– –character-set-server=utf8mb4
– –collation-server=utf8mb4_bin
– –default-authentication-plugin=mysql_native_password
metadata:
labels:
apps: slave
root@master231:~/manifests/mysql-master-slave#
root@master231:~/manifests/mysql-master-slave# kubectl apply -f .
configmap/mysql80 created
persistentvolumeclaim/pvc-master created
persistentvolumeclaim/pvc-slave created
service/svc-master created
deployment.apps/deploy-mysql-master created
deployment.apps/deploy-mysql-slave created
root@master231:~/manifests/mysql-master-slave# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-mysql-master-bbcd77749-x6ktb 0/1 ContainerCreating 0 4s
deploy-mysql-slave-5cddb648d9-r7r5q 0/1 ContainerCreating 0 4s
root@master231:~/manifests/mysql-master-slave# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-mysql-master-bbcd77749-x6ktb 1/1 Running 0 22s 10.100.2.43 worker233
deploy-mysql-slave-5cddb648d9-r7r5q 1/1 Running 1 (6s ago) 22s 10.100.2.42 worker233
root@master231:~/manifests/mysql-master-slave# kubectl exec -it deploy-mysql-master-bbcd77749-x6ktb -- mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.36 MySQL Community Server – GPL
Copyright (c) 2000, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.
mysql> show grants for ysl;
+————————————-+
| Grants for ysl@% |
+————————————-+
| GRANT USAGE ON *.* TO `ysl`@`%` |
+————————————-+
1 row in set (0.00 sec)
mysql> show master status\G;
*************************** 1. row ***************************
File: deploy-mysql-master-bbcd77749-x6ktb-bin.000003
Position: 157
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
ERROR:
No query specified
mysql> exit
Bye
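温馨提示:
    用于主从复制的账号(本例为ysl)需要提前在master上创建并授予REPLICATION SLAVE权限,否则slave向源库注册时会失败,参考写法如下(密码仅为示例):
mysql> CREATE USER IF NOT EXISTS 'ysl'@'%' IDENTIFIED BY 'ysl';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'ysl'@'%';
mysql> FLUSH PRIVILEGES;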
root@master231:~/manifests/mysql-master-slave# kubectl exec -it deploy-mysql-slave-5cddb648d9-r7r5q -- mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.36 MySQL Community Server – GPL
Copyright (c) 2000, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.
mysql> CHANGE MASTER TO MASTER_HOST='svc-master',MASTER_USER='ysl',MASTER_PASSWORD='ysl',MASTER_PORT=3306,MASTER_LOG_FILE='deploy-mysql-master-546dcd6d79-25jq2-bin.000003',MASTER_LOG_POS=380,MASTER_CONNECT_RETRY=3;
Query OK, 0 rows affected, 10 warnings (0.04 sec)
mysql> CHANGE MASTER TO MASTER_HOST='svc-master',MASTER_USER='ysl',MASTER_PASSWORD='ysl',MASTER_PORT=3306,MASTER_LOG_FILE='deploy-mysql-master-bbcd77749-x6ktb-bin.000003',MASTER_LOG_POS=380,MASTER_CONNECT_RETRY=3;
Query OK, 0 rows affected, 10 warnings (0.02 sec)
mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.02 sec)
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting to reconnect after a failed registration on source
Master_Host: svc-master
Master_User: ysl
Master_Port: 3306
Connect_Retry: 3
Master_Log_File: deploy-mysql-master-bbcd77749-x6ktb-bin.000003
Read_Master_Log_Pos: 380
Relay_Log_File: deploy-mysql-slave-5cddb648d9-r7r5q-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: deploy-mysql-master-bbcd77749-x6ktb-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 380
Relay_Log_Space: 157
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 13120
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 111
Master_UUID: ea116814-1de1-11f0-8d0f-d60610b71a57
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp: 250420 20:24:01
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
Master_public_key_path:
Get_master_public_key: 0
Network_Namespace:
1 row in set, 1 warning (0.00 sec)
root@master231:~/manifests/mysql-master-slave# cat sts-mysql.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql80
data:
master.cnf: |
[mysqld]
server_id=111
skip-host-cache
skip-name-resolve
log-bin=mysql-bin
character-set-server=utf8mb4
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql
pid-file=/var/run/mysqld/mysqld.pid
[client]
default-character-set=utf8mb4
socket=/var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
slave.cnf: |
[mysqld]
server_id=222
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql
#super-read-only
character-set-server=utf8mb4
pid-file=/var/run/mysqld/mysqld.pid
[client]
default-character-set=utf8mb4
socket=/var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
—
apiVersion: v1
kind: Service
metadata:
name: headless-db
spec:
ports:
– port: 3306
clusterIP: None
selector:
app: db
—
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: sts-mysql
spec:
selector:
matchLabels:
apps: db
serviceName: “headless-db”
replicas: 2
podManagementPolicy: OrderedReady
template:
spec:
initContainers:
– name: copy-conf
image: harbor.ysl.com/ysl-xiuxian/apps:v1
volumeMounts:
– name: conf
mountPath: /opt
– name: sub-conf
mountPath: /mnt
command:
– /bin/sh
– -c
– |
if [[ ${HOSTNAME} == sts-mysql-0 ]]
then
cp /opt/master.cnf /mnt/my.cnf
else
cp /opt/slave.cnf /mnt/my.cnf
fi
volumes:
– name: dt
hostPath:
path: /etc/localtime
– name: conf
configMap:
name: mysql80
– name: sub-conf
emptyDir: {}
#- name: master
# persistentVolumeClaim:
# claimName: pvc-master
containers:
– name: db
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/wordpress:mysql-8.0.36-oracle
env:
– name: MYSQL_ALLOW_EMPTY_PASSWORD
value: “yes”
– name: MYSQL_USER
value: test
– name: MYSQL_PASSWORD
value: yangsenlin
volumeMounts:
#- name: master
# mountPath: /var/lib/mysql
– name: sub-conf
mountPath: /root/
– name: dt
mountPath: /etc/localtime
args:
– –defaults-file=/root/my.cnf
– –character-set-server=utf8mb4
– –collation-server=utf8mb4_bin
– –default-authentication-plugin=mysql_native_password
metadata:
labels:
apps: db
root@master231:~/manifests/mysql-master-slave#
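验证思路示例(若前面Deployment版本的cm mysql80等同名资源仍存在,建议先清理再应用):
root@master231:~/manifests/mysql-master-slave# kubectl apply -f sts-mysql.yaml
root@master231:~/manifests/mysql-master-slave# kubectl get sts sts-mysql
root@master231:~/manifests/mysql-master-slave# kubectl get pods -l apps=db -o wide
# sts-mysql-0使用master.cnf,sts-mysql-1使用slave.cnf,可进入容器确认
root@master231:~/manifests/mysql-master-slave# kubectl exec -it sts-mysql-0 -- grep server_id /root/my.cnf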
40 日志采集ElasticStack
1.什么是Operator
Operator是由CoreOS开发的,Operator是增强型的控制器(Controller),简单理解就是Operator = Controller + CRD。
Operator扩展了Kubernetes API的功能,并基于该扩展管理复杂应用程序。
– 1.Operator是Kubernetes的扩展软件, 它利用定制的资源类型来增强自动化管理应用及其组件的能力,从而扩展了集群的行为模式;
– 2.使用自定义资源(例如CRD)来管理应用程序及其组件;
– 3.将应用程序视为单个对象,并提供面向该应用程序的自动化管控操作(声明式管理),例如部署、配置、升级、备份、故障转移和灾难恢复等;
Operator也有很多应用仓库,其中典型代表是OperatorHub:
https://operatorhub.io/
基于专用的Operator编排运行某个有状态应用的流程:
– 1.部署Operator及其专用的资源类型;
– 2.使用专用的资源类型,来声明一个有状态应用编排请求:
2 ECK(Elastic Cloud on Kubernetes)
2.1 创建自定义资源
root@master231:~/manifests/operator# wget https://download.elastic.co/downloads/eck/2.16.1/crds.yaml
2.2 创建operator及RBAC
root@master231:~/manifests/operator# wget https://download.elastic.co/downloads/eck/2.16.1/operator.yaml
root@master231:~/manifests/operator# ll
total 696
drwxr-xr-x 2 root root 4096 Apr 20 20:37 ./
drwxr-xr-x 28 root root 4096 Apr 20 20:35 ../
-rw-r–r– 1 root root 680855 Jan 18 10:35 crds.yaml
-rw-r–r– 1 root root 19777 Jan 18 10:35 operator.yaml
root@master231:~/manifests/operator# kubectl apply -f .
2.3 部署集群
[root@master231 operator]# cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 8.17.2
nodeSets:
– name: default
count: 3
config:
node.store.allow_mmap: false
podTemplate:
metadata:
labels:
my.custom.domain/label: “label-value”
annotations:
my.custom.domain/annotation: “annotation-value”
spec:
tolerations:
– key: node-role.kubernetes.io/master
effect: NoSchedule
EOF
elasticsearch.elasticsearch.k8s.elastic.co/quickstart created
2.4 查找默认的用户名密码
[root@master231 operator]# kubectl get secrets quickstart-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d | more
1BV3oC4Rvmc388b85JrWz4c2
[root@master231 operator]#
2.5 查看集群状态
[root@master231 operator]# curl -u "elastic:1BV3oC4Rvmc388b85JrWz4c2" -k https://10.100.1.208:9200
{
“name” : “quickstart-es-default-0”,
“cluster_name” : “quickstart”,
“cluster_uuid” : “YG1f6FpuSMiNPhNr5WyUFg”,
“version” : {
“number” : “8.17.2”,
“build_flavor” : “default”,
“build_type” : “docker”,
“build_hash” : “747663ddda3421467150de0e4301e8d4bc636b0c”,
“build_date” : “2025-02-05T22:10:57.067596412Z”,
“build_snapshot” : false,
“lucene_version” : “9.12.0”,
“minimum_wire_compatibility_version” : “7.17.0”,
“minimum_index_compatibility_version” : “7.0.0”
},
“tagline” : “You Know, for Search”
}
[root@master231 operator]#
[root@master231 operator]# curl -u "elastic:1BV3oC4Rvmc388b85JrWz4c2" -k https://10.100.1.208:9200/_cat/nodes
10.100.1.208 56 75 7 0.09 0.34 0.46 cdfhilmrstw * quickstart-es-default-0
10.100.2.253 24 72 4 0.28 0.27 0.33 cdfhilmrstw – quickstart-es-default-1
10.100.0.38 44 76 8 1.48 1.12 0.86 cdfhilmrstw – quickstart-es-default-2
[root@master231 operator]#
3.优化后案例
[root@master231 operator]# cat 01-casedemo-es.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: ysl-ysl
spec:
version: 8.17.2
nodeSets:
– name: default
count: 3
config:
node.store.allow_mmap: false
podTemplate:
spec:
tolerations:
– key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
– name: elasticsearch
env:
– name: ES_JAVA_OPTS
value: “-Xms512m -Xmx512m”
resources:
limits:
cpu: 1
memory: 1024Mi
requests:
cpu: 500m
memory: 512Mi
# volumeMounts:
# – name: elasticsearch-data
# mountPath: /var/lib/elasticsearch
# volumeClaimTemplates:
# – metadata:
# name: elasticsearch-data
# spec:
# accessModes:
# – ReadWriteOnce
# resources:
# requests:
# storage: 5Gi
# storageClassName: nfs-csi
[root@master231 operator]#
[root@master231 operator]# kubectl apply -f 01-casedemo-es.yaml
elasticsearch.elasticsearch.k8s.elastic.co/ysl-ysl created
[root@master231 operator]#
3.2 查看服务
[root@master231 operator]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ysl-ysl-es-default-0 1/1 Running 0 62s 10.100.1.212 worker232
ysl-ysl-es-default-1 1/1 Running 0 62s 10.100.2.4 worker233
ysl-ysl-es-default-2 1/1 Running 0 62s 10.100.0.41 master231
[root@master231 operator]#
[root@master231 operator]# kubectl get secrets ysl-ysl-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d | more
WZ8Lolyd9q268uLg55p655uc
[root@master231 operator]#
3.3 测试验证
[root@master231 operator]# curl -k -u "elastic:WZ8Lolyd9q268uLg55p655uc" https://10.200.93.152:9200
{
“name” : “ysl-ysl-es-default-2”,
“cluster_name” : “ysl-ysl”,
“cluster_uuid” : “YhYv6k_9ShKiji02RrN85A”,
“version” : {
“number” : “8.17.2”,
“build_flavor” : “default”,
“build_type” : “docker”,
“build_hash” : “747663ddda3421467150de0e4301e8d4bc636b0c”,
“build_date” : “2025-02-05T22:10:57.067596412Z”,
“build_snapshot” : false,
“lucene_version” : “9.12.0”,
“minimum_wire_compatibility_version” : “7.17.0”,
“minimum_index_compatibility_version” : “7.0.0”
},
“tagline” : “You Know, for Search”
}
[root@master231 operator]#
[root@master231 operator]# curl -k -u "elastic:WZ8Lolyd9q268uLg55p655uc" https://10.200.93.152:9200/_cat/nodes
10.100.2.4 44 92 11 0.24 0.47 0.42 cdfhilmrstw – ysl-ysl-es-default-1
10.100.0.41 41 96 13 0.59 1.20 1.09 cdfhilmrstw * ysl-ysl-es-default-2
10.100.1.212 53 91 11 0.28 0.74 0.60 cdfhilmrstw – ysl-ysl-es-default-0
[root@master231 operator]#
参考链接:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
– 基于operator部署kibana实战
1.部署ES集群
[root@master231 operator]# curl -u "elastic:WZ8Lolyd9q268uLg55p655uc" -k https://10.200.93.152:9200/_cat/nodes
10.100.2.4 61 92 1 0.02 0.07 0.21 cdfhilmrstw – ysl-ysl-es-default-1
10.100.0.41 48 96 1 0.56 0.64 0.63 cdfhilmrstw * ysl-ysl-es-default-2
10.100.1.212 68 92 1 0.30 0.19 0.19 cdfhilmrstw – ysl-ysl-es-default-0
[root@master231 operator]#
[root@master231 operator]# kubectl get es
NAME HEALTH NODES VERSION PHASE AGE
ysl-ysl green 3 8.17.2 Ready 171m
[root@master231 operator]#
2.部署kibana
[root@master231 operator]# cat 02-casedemo-kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: ysl-ysl
spec:
version: 8.17.2
count: 1
elasticsearchRef:
name: ysl-ysl
config:
i18n.locale: zh-CN
—
apiVersion: v1
kind: Service
metadata:
name: svc-kibana
spec:
type: LoadBalancer
selector:
kibana.k8s.elastic.co/name: ysl-ysl
ports:
– port: 5601
[root@master231 operator]#
[root@master231 operator]# kubectl apply -f 02-casedemo-kibana.yaml
kibana.kibana.k8s.elastic.co/ysl-ysl created
service/svc-kibana created
[root@master231 operator]#
3.访问测试
[root@master231 operator]# kubectl get svc svc-kibana
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-kibana LoadBalancer 10.200.62.17 10.0.0.152 5601:30338/TCP 4m11s
[root@master231 operator]#
访问”https://10.0.0.152:5601″即可。
温馨提示:
如果要查看或修改elastic用户的密码,可以先从secret中获取当前密码:
[root@master231 operator]# kubectl get secrets ysl-ysl-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d ; echo
WZ8Lolyd9q268uLg55p655uc
[root@master231 operator]#
– 基于operator部署filebeat实战
1.编写资源清单
[root@master231 operator]# cat 03-casedemo-beats.yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: ysl-ysl
spec:
type: filebeat
version: 8.17.2
elasticsearchRef:
name: ysl-ysl
config:
filebeat.inputs:
– type: container
paths:
– /var/log/containers/*.log
processors:
– add_kubernetes_metadata:
in_cluster: true
output.elasticsearch:
username: elastic
password: WZ8Lolyd9q268uLg55p655uc
index: “ysl-ysl-k8s-haha-%{+yyyy.MM.dd}”
setup.ilm.enabled: false
setup.template.name: ysl-ysl-k8s
setup.template.pattern: “ysl-ysl-k8s*”
setup.template.settings:
index.number_of_shards: 5
index.number_of_replicas: 0
setup.template.overwrite: false
daemonSet:
podTemplate:
spec:
tolerations:
– key: node-role.kubernetes.io/master
effect: NoSchedule
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
securityContext:
runAsUser: 0
containers:
– name: filebeat
volumeMounts:
– name: varlogcontainers
mountPath: /var/log/containers
– name: varlogpods
mountPath: /var/log/pods
– name: varlibdockercontainers
mountPath: /var/lib/docker/containers
volumes:
– name: varlogcontainers
hostPath:
path: /var/log/containers
– name: varlogpods
hostPath:
path: /var/log/pods
– name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
[root@master231 operator]#
[root@master231 operator]# kubectl apply -f 03-casedemo-beats.yaml
beat.beat.k8s.elastic.co/ysl-ysl created
[root@master231 operator]#
镜像地址:
http://192.168.15.253/Resources/Kubernetes/Project/ElasticStack/v8.17.2/
– K8S日志采集数据到集群外部
1.k8s日志收集方案
- sidecar: 边车模式
在现有的业务Pod中注入一个Filebeat容器,和业务容器共享存储卷,从而进行日志采集(下文给出一个最小示意清单)。
优点:
- 理解起来比较简单,采集逻辑可以按业务自定义;
缺点:
- 随着Pod数量的增多,一个节点可能有数百个容器,意味着要启动数百个Filebeat实例,资源开销大;
- 使用DaemonSet,各节点独占一个实例
在每个工作节点部署一个Filebeat实例,采集该节点的所有日志。
优点:
- 相较于sidecar更加节省资源;
缺点:
- 需要创建sa和api-server通信,以获取当前节点的Pod信息;
- 业务人员在代码中自行实现日志采集上报
优点:
- 运维人员无需参与解决问题;
缺点:
- 实现难度较大,技术人员的能力水平参差不齐,写出的代码可能影响到正常业务;
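下面给出sidecar方式的一个最小示意清单(仅作思路演示,其中ConfigMap名称"filebeat-sidecar-config"为假设,需自行提供filebeat.yml;镜像沿用本文其他案例,未经完整验证):
apiVersion: v1
kind: Pod
metadata:
  name: demo-sidecar-logging
spec:
  volumes:
  # 业务容器与filebeat容器共享的日志目录
  - name: applogs
    emptyDir: {}
  # 假设的filebeat配置,需提前创建同名ConfigMap(key为filebeat.yml)
  - name: filebeat-config
    configMap:
      name: filebeat-sidecar-config
  containers:
  # 业务容器,将日志写入共享目录
  - name: app
    image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
    volumeMounts:
    - name: applogs
      mountPath: /var/log/nginx
  # 边车容器,从共享目录读取日志并输出到ES
  - name: filebeat
    image: harbor.ysl.com/ysl-elasitcstack/filebeat:7.17.26
    args: ["-c", "/etc/filebeat.yml", "-e"]
    volumeMounts:
    - name: applogs
      mountPath: /var/log/nginx
      readOnly: true
    - name: filebeat-config
      mountPath: /etc/filebeat.yml
      subPath: filebeat.yml
      readOnly: true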
2.编写资源清单
[root@master231 01-elasticstack]# cat 01-ep-svc-es.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: ysl-es7
subsets:
– addresses:
– ip: 10.0.0.91
– ip: 10.0.0.92
– ip: 10.0.0.93
ports:
– port: 9200
—
apiVersion: v1
kind: Service
metadata:
name: ysl-es7
spec:
type: ClusterIP
ports:
– port: 9200
targetPort: 9200
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]# cat 02-cm-filebeat.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
data:
filebeat.yml: |-
filebeat.config:
inputs:
# Mounted `filebeat-inputs` configmap:
path: ${path.config}/inputs.d/*.yml
# Reload inputs configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
output.elasticsearch:
hosts: [‘https://ysl-es7:9200’]
# hosts: [‘ysl-es7.default.svc.ysl.com:9200’]
index: ‘ysl-k8s-ds-%{+yyyy.MM.dd}’
# 跳过证书校验,有效值为: full(default),strict,certificate,none
# 参考链接:
# https://www.elastic.co/guide/en/beats/filebeat/7.17/configuration-ssl.html#client-verification-mode
ssl.verification_mode: none
username: “elastic”
password: “123456”
# 配置索引模板
setup.ilm.enabled: false
setup.template.name: “ysl-k8s-ds”
setup.template.pattern: “ysl-k8s-ds*”
setup.template.overwrite: false
setup.template.settings:
index.number_of_shards: 5
index.number_of_replicas: 0
—
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-inputs
data:
kubernetes.yml: |-
– type: docker
containers.ids:
– “*”
processors:
– add_kubernetes_metadata:
in_cluster: true
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]# cat 03-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
—
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
– kind: ServiceAccount
name: filebeat
namespace: default
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
—
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
– apiGroups: [“”]
resources:
– namespaces
– pods
– nodes
verbs:
– get
– watch
– list
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]# cat 04-ds-filebeat.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
tolerations:
– key: node-role.kubernetes.io/master
effect: NoSchedule
operator: Exists
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
– name: filebeat
# image: docker.elastic.co/beats/filebeat:7.17.26
image: harbor.ysl.com/ysl-elasitcstack/filebeat:7.17.26
args: [
“-c”, “/etc/filebeat.yml”,
“-e”,
]
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
– name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
– name: inputs
mountPath: /usr/share/filebeat/inputs.d
readOnly: true
– name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
– name: config
configMap:
defaultMode: 0600
name: filebeat-config
– name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
– name: inputs
configMap:
defaultMode: 0600
name: filebeat-inputs
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]# kubectl apply -f .
endpoints/ysl-es7 created
service/ysl-es7 created
configmap/filebeat-config created
configmap/filebeat-inputs created
serviceaccount/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
daemonset.apps/filebeat created
[root@master231 01-elasticstack]#
[root@master231 01-elasticstack]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
filebeat-cshms 1/1 Running 0 22s 10.100.0.42 master231
filebeat-g5rtg 1/1 Running 0 22s 10.100.2.7 worker233
filebeat-xm5tw 1/1 Running 0 22s 10.100.1.214 worker232
[root@master231 01-elasticstack]#
41 探针
常用的探针(Probe):
livenessProbe:
健康状态检查,周期性检查服务是否存活,检查结果失败,将"重启"容器(删除原容器并重新创建新容器)。
如果容器没有提供健康状态检查,则默认状态为Success。
readinessProbe:
可用性检查,周期性检查服务是否可用,从而判断容器是否就绪。
若检测Pod服务不可用,会将Pod标记为未就绪状态,而svc的ep列表会将Addresses的地址移动到NotReadyAddresses列表。
若检测Pod服务可用,则ep会将Pod地址从NotReadyAddresses列表重新添加到Addresses列表中。
如果容器没有提供可用性检查,则默认状态为Success。
startupProbe: (1.16+之后的版本才支持)
如果提供了启动探针,则所有其他探针都会被禁用,直到此探针成功为止。
如果启动探测失败,kubelet将杀死容器,而容器依其重启策略进行重启。
如果容器没有提供启动探测,则默认状态为 Success。
对于startup探针是一次性检测,容器启动时进行检测,检测成功后才会调用其他探针,且此探针不再生效。
探针(Probe)检测Pod服务方法:
exec:
执行一段命令,根据返回值判断执行结果。返回值为0或非0,有点类似于”echo $?”。
httpGet:
发起HTTP请求,根据返回的状态码来判断服务是否正常。
200: 返回状态码成功
301: 永久跳转
302: 临时跳转
401: 验证失败
403: 权限被拒绝
404: 文件找不到
413: 文件上传过大
500: 服务器内部错误
502: 网关错误(上游返回了无效响应)
504: 后端应用网关响应超时
…
tcpSocket:
测试某个TCP端口是否能够连接,类似于telnet,nc等测试工具。
grpc:
k8s 1.19+版本才支持,1.23依旧属于一个alpha阶段。
参考链接:
https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe
1 livenessProbe探针之exec探测方式
root@master231:~/manifests/probe# cat 01-pods-livenessProbe-exec.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-livenessprobe-exec-001
spec:
containers:
– image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
name: c1
command:
– /bin/sh
– -c
– touch /tmp/ysl-linux-healthy; sleep 20; rm -f /tmp/ysl-linux-healthy; sleep 600
# 健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器。
livenessProbe:
# 使用exec的方式去做健康检查
exec:
# 自定义检查的命令
command:
– cat
– /tmp/ysl-linux-healthy
# 指定探针检测的频率,默认是10s,最小值为1.
periodSeconds: 1
# 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
failureThreshold: 3
# 检测服务成功次数的累加值,默认值为1次,最小值1.
successThreshold: 1
# 指定多久之后进行健康状态检查,即此时间段内检测服务失败并不会对failureThreshold进行计数。
initialDelaySeconds: 30
# 一次检测周期超时的秒数,默认值是1秒,最小值为1.
timeoutSeconds: 1
2 livenessProbe探针之httpGet探测方式
root@master231:~/manifests/probe# cat 02-pods-livenessProbe-httpGet.yaml
apiVersion: v1
kind: Pod
metadata:
name: livenessprobe-httpget-001
spec:
containers:
– image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
name: c1
# 健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器。
livenessProbe:
# 使用httpGet的方式去做健康检查
httpGet:
# 指定访问的端口号
port: 80
# 检测指定的访问路径
path: /index.html
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
root@master231:~/manifests/probe#
3 livenessProbe探针之tcpSocket探测方式
[root@master231 probe]# cat 03-pods-livenessProbe-tcpsocket.yaml
apiVersion: v1
kind: Pod
metadata:
name: livenessprobe-tcpsocket-001
spec:
containers:
– image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2
name: c1
command:
– /bin/sh
– -c
– nginx; sleep 10; nginx -s stop; sleep 600
#健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器
livenessProbe:
#使用tcpsocket的方式做健康检查
tcpSocket:
port: 80
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
4 livenessProbe探针之grpc探测方式
[root@master231 probe]# cat 04-pods-livenessProbe-grpc.yaml
apiVersion: v1
kind: Pod
metadata:
name: livenessprobe-grpc
spec:
restartPolicy: Always
containers:
– image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/etcd:3.5.10
name: web
imagePullPolicy: IfNotPresent
command:
– /opt/bitnami/etcd/bin/etcd
– –data-dir=/tmp/etcd
– –listen-client-urls=http://0.0.0.0:2379
– –advertise-client-urls=http://127.0.0.1:2379
– –log-level=debug
ports:
– containerPort: 2379
livenessProbe:
# 对grpc端口发起grpc调用,目前属于alpha测试阶段,如果真的想要使用,请在更高版本关注,比如k8s 1.24+
# 在1.23.17版本中,如果检测失败,会触发警告,但不会重启容器只是会有警告事件。
grpc:
port: 2379
# 指定服务,但是服务名称我是瞎写的,实际工作中会有开发告诉你
service: /health
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
5 readinessProbe探针之exec探测方式
root@master231:~/manifests/probe# cat 05-deploy-svc-readinessprobe-livenessProbe-exec.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian
spec:
replicas: 3
selector:
matchLabels:
apps: v1
template:
metadata:
labels:
apps: v1
spec:
containers:
– image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
name: c1
command:
– /bin/sh
– -c
– touch /tmp/ysl-linux-healthy; sleep 60; rm -f /tmp/ysl-linux-healthy; sleep 600
livenessProbe:
exec:
command:
– cat
– /tmp/ysl-linux-healthy
failureThreshold: 3
initialDelaySeconds: 120
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
# 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
readinessProbe:
# 使用exec的方式去做健康检查
exec:
# 自定义检查的命令
command:
– cat
– /tmp/ysl-linux-healthy
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
—
apiVersion: v1
kind: Service
metadata:
name: ysl-readinessprobe-exec
spec:
selector:
apps: v1
ports:
– port: 80
targetPort: 80
protocol: TCP
root@master231:~/manifests/probe#
6 readinessProbe探针之httpGet探测方式
root@master231:~/manifests/probe# cat 06-deploy-svc-readinessProbe-livenessProbe-httpGet.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian
spec:
replicas: 3
selector:
matchLabels:
apps: v2
template:
metadata:
labels:
apps: v2
spec:
containers:
– image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2
name: c1
command:
– /bin/sh
– -c
– touch /tmp/ysl-linux-healthy; sleep 30; rm -f /tmp/ysl-linux-healthy; sleep 600
livenessProbe:
exec:
command:
– cat
– /tmp/ysl-linux-healthy
failureThreshold: 3
initialDelaySeconds: 120
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
# 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
readinessProbe:
# 使用httpGet的方式去做健康检查
httpGet:
# 指定访问的端口号
port: 80
path: /index.html
failureThreshold: 3
initialDelaySeconds: 15
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
—
apiVersion: v1
kind: Service
metadata:
name: ysl-readinessprobe-httpget
spec:
clusterIP: 10.200.0.11
selector:
apps: v2
ports:
– port: 80
targetPort: 80
protocol: TCP
root@master231:~/manifests/probe#
测试方式:
[root@master231 ~]# while true;do curl 10.200.0.11;sleep 0.5;done
7 readinessProbe探针之tcpSocket探测方式
root@master231:~/manifests/probe# cat 07-deploy-svc-readinessProbe-livenessProbe-tcpSocket.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian
spec:
replicas: 3
selector:
matchLabels:
apps: v2
template:
metadata:
labels:
apps: v2
spec:
containers:
– image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2
name: c1
command:
– /bin/sh
– -c
– |
touch /tmp/ysl-linux-healthy
sleep 10
rm -f /tmp/ysl-linux-healthy
nginx
tail -f /etc/hosts
livenessProbe:
exec:
command:
– cat
– /tmp/ysl-linux-healthy
failureThreshold: 3
initialDelaySeconds: 120
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
# 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
readinessProbe:
# 使用tcpSocket的方式去做健康检查
tcpSocket:
# 探测80端口是否存活
port: 80
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
—
apiVersion: v1
kind: Service
metadata:
name: ysl-readinessprobe-tcpsocket
spec:
clusterIP: 10.200.0.22
selector:
apps: v2
ports:
– port: 80
targetPort: 80
protocol: TCP
root@master231:~/manifests/probe#
测试方式:
[root@master231 ~]# while true;do curl 10.200.0.22;sleep 0.5;done
8 启动探针startupProbe
root@master231:~/manifests/probe# cat 08-startupProbe-httpGet.yaml
apiVersion: v1
kind: Pod
metadata:
name: startupprobe-httpget-01
spec:
volumes:
– name: data
emptyDir: {}
initContainers:
– name: init01
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
volumeMounts:
– name: data
mountPath: /ysl
command:
– /bin/sh
– -c
– echo “liveness probe test page” >> /ysl/huozhe.html
– name: init02
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2
volumeMounts:
– name: data
mountPath: /ysl
command:
– /bin/sh
– -c
– echo “readiness probe test page” >> /ysl/ysl.html
– name: init03
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v3
volumeMounts:
– name: data
mountPath: /ysl
command:
– /bin/sh
– -c
– echo “startup probe test page” >> /ysl/start.html
containers:
– name: c1
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
volumeMounts:
– name: data
mountPath: /usr/share/nginx/html
# 判断服务是否健康,若检查不通过,将Pod直接重启。
livenessProbe:
httpGet:
port: 80
path: /huozhe.html
failureThreshold: 3
initialDelaySeconds: 5
#initialDelaySeconds: 120
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
# 判断服务是否就绪,若检查不通过,将Pod标记为未就绪状态。
readinessProbe:
httpGet:
port: 80
path: /ysl.html
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
# 启动时做检查,若检查不通过,直接杀死容器。并进行重启!
# startupProbe探针通过后才会去执行readinessProbe和livenessProbe哟~
startupProbe:
httpGet:
port: 80
path: /start.html
failureThreshold: 3
# 尽管上面的readinessProbe和livenessProbe数据已经就绪,但必须等待startupProbe的检测成功后才能执行。
initialDelaySeconds: 35
# initialDelaySeconds: 15
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
root@master231:~/manifests/probe#
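以上面的清单为例: startupProbe的initialDelaySeconds为35秒,即容器启动约35秒后才开始startup检测;在其检测成功之前,livenessProbe和readinessProbe都不会执行;startup检测通过后,liveness与readiness才按各自的periodSeconds周期性探测。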
42 静态Pod
1 静态Pod概述
所谓的静态Pod就是kubelet自己监视的一个目录,如果这个目录里有Pod资源清单,kubelet就会直接在当前节点上创建该Pod。
也就是说不经过APIServer就可以直接创建Pod,静态Pod仅对Pod类型的资源有效,其他资源类型无效。
静态Pod创建的资源,名称后缀都会加一个当前节点的名称。
2 查看kubelet配置默认静态目录
root@master231:~/manifests/probe# grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
root@master231:~/manifests/probe#
root@master231:~/manifests/probe# ll /etc/kubernetes/manifests
total 24
drwxr-xr-x 2 root root 4096 Apr 4 19:30 ./
drwxr-xr-x 4 root root 4096 Apr 4 19:30 ../
-rw------- 1 root root 2280 Apr 4 19:30 etcd.yaml
-rw------- 1 root root 4026 Apr 4 19:30 kube-apiserver.yaml
-rw------- 1 root root 3546 Apr 4 19:30 kube-controller-manager.yaml
-rw------- 1 root root 1465 Apr 4 19:30 kube-scheduler.yaml
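作为演示,可以在该目录下放置一个最简的Pod清单(以下为示意文件,文件名与内容均为假设,镜像沿用前文案例),kubelet扫描到后会自动创建名为demo-static-pod-master231的Pod:
# /etc/kubernetes/manifests/demo-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-static-pod
spec:
  containers:
  - name: c1
    image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
    ports:
    - containerPort: 80
将该文件移出此目录,对应的静态Pod会被自动删除。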
3 修改nodePort默认端口
默认是30000-32767
修改端口范围
root@master231:~# grep -n service-node-port-range /etc/kubernetes/manifests/kube-apiserver.yaml
17: – –service-node-port-range=3000-50000
root@master231:~#
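若清单中没有该参数,则api-server使用默认范围30000-32767;需要修改时,在kube-apiserver.yaml的spec.containers.command列表中添加或调整如下一行即可(示意):
- --service-node-port-range=3000-50000
由于kube-apiserver本身就是静态Pod,保存后kubelet会自动重建它,无需手动重启(参考下一小节)。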
4 快速重建api-server
root@master231:~# cd /etc/kubernetes/manifests/
root@master231:/etc/kubernetes/manifests# mv kube-apiserver.yaml /opt/
root@master231:/etc/kubernetes/manifests# mv /opt/kube-apiserver.yaml ./
root@master231:/etc/kubernetes/manifests#
root@master231:/etc/kubernetes/manifests# kubectl -n kube-system get pods kube-apiserver-master231
NAME READY STATUS RESTARTS AGE
kube-apiserver-master231 1/1 Running 1 (31s ago) 2m2s
43 容器的生命周期postStart和preStop
root@master231:/manifests/pods# cat 24-pod-lifecycle-postStart-preStop.yaml
apiVersion: v1
kind: Pod
metadata:
name: lifecycle-poststart-prestop-001
spec:
volumes:
– name: data
hostPath:
path: /ysl-linux
# 在pod优雅终止时,定义延迟发送kill信号的时间,此时间可用于pod处理完未处理的请求等状况。
# 默认单位是秒,若不设置默认值为30s。
# terminationGracePeriodSeconds: 60
terminationGracePeriodSeconds: 3
containers:
– name: c1
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
volumeMounts:
– name: data
mountPath: /data
# 定义容器的生命周期。
lifecycle:
# 容器启动之后做的事情
postStart:
exec:
command:
– “/bin/sh”
– “-c”
– “sleep 30;echo \”postStart at $(date +%F_%T)\” >> /data/postStart.log”
# 容器停止之前做的事情
preStop:
exec:
command:
– “/bin/sh”
– “-c”
– “sleep 20;echo \”preStop at $(date +%F_%T)\” >> /data/preStop.log”
root@master231:/manifests/pods#
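需要注意: preStop与terminationGracePeriodSeconds共用同一个终止倒计时。以上面的清单为例,宽限期只有3秒,而preStop里先sleep 20,宽限期到期后容器会被强制杀死,preStop大概率无法执行完(preStop.log可能不会生成);若希望preStop完整执行,应将terminationGracePeriodSeconds调大到超过preStop的耗时。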
44 kubelet启动容器的原理
1 kubelet创建Pod的全流程:
- 1.kubelet调用CRI接口创建容器,底层支持docker|containerd作为容器运行时;
- 2.容器运行时底层基于runc(符合OCI规范)创建容器;
- 3.优先创建pause基础容器;
- 4.依次执行初始化容器(initContainers);
- 5.启动业务容器,业务容器如果定义了优雅终止和探针,则顺序如下:
  - 5.1 启动命令【COMMAND】
  - 5.2 启动postStart(与COMMAND异步触发,无先后保证);
  - 5.3 Probe
    - StartupProbe
    - LivenessProbe | readinessProbe
  - 5.4 Pod终止前执行preStop
2 测试案例
root@master231:/manifests/pods# cat 25-startkubelet-workflow.yaml
apiVersion: v1
kind: Pod
metadata:
name: pods-workflow-001
spec:
volumes:
– name: data
hostPath:
path: /ysl-shaonao
initContainers:
– name: init01
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
volumeMounts:
– name: data
mountPath: /ysl
command:
– “/bin/sh”
– “-c”
– “echo \”initContainer at $(date +%F_%T)\” > /ysl/haha.log”
terminationGracePeriodSeconds: 3
containers:
– name: c1
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v3
command:
– /bin/sh
– -c
– “echo \”command at $(date +%F_%T)\” >> /usr/share/nginx/html/haha.log; sleep 600″
volumeMounts:
– name: data
mountPath: /usr/share/nginx/html
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
– “/bin/sh”
– “-c”
– “echo \”livenessProbe at $(date +%F_%T)\” >> /usr/share/nginx/html/haha.log”
failureThreshold: 3
initialDelaySeconds: 0
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
exec:
command:
– “/bin/sh”
– “-c”
– “echo \”readinessProbe at $(date +%F_%T)\” >> /usr/share/nginx/html/haha.log”
failureThreshold: 3
initialDelaySeconds: 0
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
startupProbe:
exec:
command:
– “/bin/sh”
– “-c”
– “echo \”startupProbe at $(date +%F_%T)\” >> /usr/share/nginx/html/haha.log”
failureThreshold: 3
initialDelaySeconds: 0
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
lifecycle:
postStart:
exec:
command:
– “/bin/sh”
– “-c”
– “sleep 10;echo \”postStart at $(date +%F_%T)\” >> /usr/share/nginx/html/haha.log”
preStop:
exec:
command:
– “/bin/sh”
– “-c”
– “echo \”preStop at $(date +%F_%T)\” >> /usr/share/nginx/html/haha.log;sleep 30″
root@master231:/manifests/pods#
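预期宿主机/ysl-shaonao/haha.log中日志出现的大致顺序为: initContainer -> command -> postStart -> startupProbe -> livenessProbe/readinessProbe(周期性交替出现),删除Pod时再追加preStop。需要说明的是,postStart与COMMAND实际是异步触发的,本例中postStart先sleep 10,因此通常排在command之后。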
45 deploy的升级策略
root@master231:/manifests/deployments# cat 03-deploy-startegy-xiuxian.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-strategy
labels:
school: ysl
spec:
replicas: 5
# 定义升级策略
strategy:
# 指定更新策略的类型,有效值为: “Recreate” or “RollingUpdate”
# Recreate:
# 表示批量删除旧的Pod,重新创建新的Pod。
# RollingUpdate: (default)
# 滚动更新Pod,并不会删除所有的旧Pod。
# type: Recreate
type: RollingUpdate
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
address: shahe
class: ysl
spec:
containers:
– name: c1
# image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v22
—
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian
spec:
type: ClusterIP
selector:
apps: xiuxian
ports:
– port: 80
root@master231:/manifests/deployments#
root@master231:/manifests/deployments# cat 04-deploy-strategy-RollingUpdate.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-strategy
labels:
school: ysl
spec:
# replicas: 4
replicas: 5
strategy:
type: RollingUpdate
# 配置滚动更新的策略
rollingUpdate:
# 升级过程中,在原有Pod数量之上,多出来的百分比或数字。
# maxSurge: 25%
# 支持数字和百分比,此处我使用的是数字。
maxSurge: 2
# 升级过程中,最多不可用的Pod数量百分比,
# maxUnavailable: 25%
maxUnavailable: 1
selector:
matchLabels:
apps: xiuxian
template:
metadata:
labels:
apps: xiuxian
address: shahe
class: ysl
spec:
containers:
– name: c1
# image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v3
imagePullPolicy: Always
—
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian
spec:
type: ClusterIP
selector:
apps: xiuxian
ports:
– port: 80
root@master231:/manifests/deployments#
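举例说明: 按上面的配置,replicas=5、maxSurge=2、maxUnavailable=1,滚动升级过程中Pod总数最多为5+2=7个,可用的Pod最少为5-1=4个。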
46 基于operator部署prometheus实现k8s监控
1.下载源代码
wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.11.0.tar.gz
2.解压目录
[root@master231 02-prometheus]# tar xf kube-prometheus-0.11.0.tar.gz
[root@master231 02-prometheus]#
[root@master231 02-prometheus]# cd kube-prometheus-0.11.0/
[root@master231 kube-prometheus-0.11.0]#
3.安装Prometheus-Operator
kubectl apply --server-side -f manifests/setup
kubectl wait \
    --for condition=Established \
    --all CustomResourceDefinition \
    --namespace=monitoring
kubectl apply -f manifests/
4.检查Prometheus是否部署成功
[root@master231 kube-prometheus-0.11.0]# kubectl get pods -n monitoring -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
alertmanager-main-0 2/2 Running 0 3m34s 10.100.2.93 worker233
alertmanager-main-1 2/2 Running 0 3m34s 10.100.1.11 worker232
alertmanager-main-2 2/2 Running 0 3m34s 10.100.2.94 worker233
blackbox-exporter-746c64fd88-g7w5k 3/3 Running 0 4m40s 10.100.2.89 worker233
grafana-5fc7f9f55d-zk5tk 1/1 Running 0 4m39s 10.100.2.91 worker233
kube-state-metrics-6c8846558c-wbs66 3/3 Running 0 4m39s 10.100.2.90 worker233
node-exporter-mttjn 2/2 Running 0 4m39s 10.0.0.232 worker232
node-exporter-szr25 2/2 Running 0 4m39s 10.0.0.231 master231
node-exporter-wdkjk 2/2 Running 0 4m39s 10.0.0.233 worker233
prometheus-adapter-6455646bdc-m9qwv 1/1 Running 0 4m38s 10.100.2.92 worker233
prometheus-adapter-6455646bdc-wbjqd 1/1 Running 0 4m38s 10.100.1.9 worker232
prometheus-k8s-0 2/2 Running 0 3m33s 10.100.1.12 worker232
prometheus-k8s-1 1/2 Running 0 3m33s 10.100.2.95 worker233
prometheus-operator-f59c8b954-ttjth 2/2 Running 0 4m38s 10.100.1.10 worker232
[root@master231 kube-prometheus-0.11.0]#
5.修改Grafana的svc
[root@master231 kube-prometheus-0.11.0]# cat manifests/grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
…
name: grafana
namespace: monitoring
spec:
type: LoadBalancer
…
[root@master231 kube-prometheus-0.11.0]#
[root@master231 kube-prometheus-0.11.0]# kubectl apply -f manifests/grafana-service.yaml
service/grafana configured
[root@master231 kube-prometheus-0.11.0]#
[root@master231 kube-prometheus-0.11.0]# kubectl get -f manifests/grafana-service.yaml
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana LoadBalancer 10.200.231.129 10.0.0.150 3000:37754/TCP 7m23s
[root@master231 kube-prometheus-0.11.0]#
6.访问Grafana的WebUI
http://10.0.0.150:3000/
默认的用户名和密码: admin
– 暴露Prometheus的服务
1.基于NodePort方式暴露
[root@master231 kube-prometheus-0.11.0]# cat manifests/prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
…
name: prometheus-k8s
namespace: monitoring
spec:
type: NodePort
ports:
– name: web
port: 9090
nodePort: 9090
targetPort: web
…
[root@master231 kube-prometheus-0.11.0]#
[root@master231 kube-prometheus-0.11.0]# kubectl apply -f manifests/prometheus-service.yaml
service/prometheus-k8s configured
[root@master231 kube-prometheus-0.11.0]#
[root@master231 kube-prometheus-0.11.0]# kubectl get -f manifests/prometheus-service.yaml
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-k8s NodePort 10.200.246.30
[root@master231 kube-prometheus-0.11.0]#
2.基于LoadBalancer
略。
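若需要实现,思路与上面修改Grafana的svc一致,把prometheus-k8s这个Service的类型改为LoadBalancer即可(以下为示意片段,省略了原文件中的其他字段,且依赖集群中已有可用的LoadBalancer实现,例如前文Grafana使用的地址池):
apiVersion: v1
kind: Service
metadata:
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: LoadBalancer
  ports:
  - name: web
    port: 9090
    targetPort: web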
3.基于端口转发
[root@master231 ~]# kubectl port-forward sts/prometheus-k8s 19090:9090 -n monitoring --address=0.0.0.0
Forwarding from 0.0.0.0:19090 -> 9090
Handling connection for 19090
Handling connection for 19090
– Prometheus监控云原生应用etcd案例
1.测试ectd metrics接口
1.1 查看etcd证书存储路径
[root@master231 yangsenlin]# egrep "\--key-file|--cert-file" /etc/kubernetes/manifests/etcd.yaml
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --key-file=/etc/kubernetes/pki/etcd/server.key
[root@master231 yangsenlin]#
1.2 测试etcd证书访问的metrics接口
[root@master231 yangsenlin]# curl -s --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key https://10.0.0.231:2379/metrics -k | tail
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code=”200″} 4
promhttp_metric_handler_requests_total{code=”500″} 0
promhttp_metric_handler_requests_total{code=”503″} 0
[root@master231 yangsenlin]#
2.创建etcd证书的secrets并挂载到Prometheus server
2.1 查找需要挂载etcd的证书文件路径
[root@master231 yangsenlin]# egrep "\--key-file|--cert-file|--trusted-ca-file" /etc/kubernetes/manifests/etcd.yaml
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
[root@master231 yangsenlin]#
2.2 根据etcd的实际存储路径创建secrets
[root@master231 yangsenlin]# kubectl create secret generic etcd-tls --from-file=/etc/kubernetes/pki/etcd/server.crt --from-file=/etc/kubernetes/pki/etcd/server.key --from-file=/etc/kubernetes/pki/etcd/ca.crt -n monitoring
secret/etcd-tls created
[root@master231 yangsenlin]#
[root@master231 yangsenlin]# kubectl -n monitoring get secrets etcd-tls
NAME TYPE DATA AGE
etcd-tls Opaque 3 12s
[root@master231 yangsenlin]#
2.3 修改Prometheus的资源,修改后会自动重启
[root@master231 yangsenlin]# kubectl -n monitoring edit prometheus k8s
…
spec:
secrets:
  - etcd-tls
…
[root@master231 yangsenlin]# kubectl -n monitoring get pods -l app.kubernetes.io/component=prometheus -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
prometheus-k8s-0 2/2 Running 0 74s 10.100.1.57 worker232
prometheus-k8s-1 2/2 Running 0 92s 10.100.2.28 worker233
[root@master231 yangsenlin]#
2.4.查看证书是否挂载成功
[root@master231 yangsenlin]# kubectl -n monitoring exec prometheus-k8s-0 -c prometheus -- ls -l /etc/prometheus/secrets/etcd-tls
total 0
lrwxrwxrwx 1 root 2000 13 Jan 24 14:07 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root 2000 17 Jan 24 14:07 server.crt -> ..data/server.crt
lrwxrwxrwx 1 root 2000 17 Jan 24 14:07 server.key -> ..data/server.key
[root@master231 yangsenlin]#
[root@master231 yangsenlin]# kubectl -n monitoring exec prometheus-k8s-1 -c prometheus -- ls -l /etc/prometheus/secrets/etcd-tls
total 0
lrwxrwxrwx 1 root 2000 13 Jan 24 14:07 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root 2000 17 Jan 24 14:07 server.crt -> ..data/server.crt
lrwxrwxrwx 1 root 2000 17 Jan 24 14:07 server.key -> ..data/server.key
[root@master231 yangsenlin]#
3.编写资源清单
[root@master231 servicemonitors]# cat 01-smon-etcd.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: etcd-k8s
namespace: kube-system
subsets:
– addresses:
– ip: 10.0.0.231
ports:
– name: https-metrics
port: 2379
protocol: TCP
—
apiVersion: v1
kind: Service
metadata:
name: etcd-k8s
namespace: kube-system
labels:
apps: etcd
spec:
ports:
– name: https-metrics
port: 2379
targetPort: 2379
type: ClusterIP
—
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: ysl-etcd-smon
namespace: monitoring
spec:
# 指定job的标签,可以不设置。
jobLabel: kubeadm-etcd-k8s-yangsenlin
# 指定监控后端目标的策略
endpoints:
# 监控数据抓取的时间间隔
– interval: 30s
# 指定metrics端口,这个port对应Services.spec.ports.name
port: https-metrics
# Metrics接口路径
path: /metrics
# Metrics接口的协议
scheme: https
# 指定用于连接etcd的证书文件
tlsConfig:
# 指定etcd的CA的证书文件
caFile: /etc/prometheus/secrets/etcd-tls/ca.crt
# 指定etcd的证书文件
certFile: /etc/prometheus/secrets/etcd-tls/server.crt
# 指定etcd的私钥文件
keyFile: /etc/prometheus/secrets/etcd-tls/server.key
# 关闭证书校验,毕竟咱们是自建的证书,而非官方授权的证书文件。
insecureSkipVerify: true
# 监控目标Service所在的命名空间
namespaceSelector:
matchNames:
– kube-system
# 监控目标Service目标的标签。
selector:
# 注意,这个标签要和etcd的service的标签保持一致哟
matchLabels:
apps: etcd
[root@master231 servicemonitors]#
4.Prometheus查看数据
etcd_cluster_version
5.Grafana导入模板
3070
– Prometheus监控非云原生应用MySQL案例
1.编写资源清单
[root@master231 servicemonitors]# cat 02-smon-mysqld.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql80-deployment
spec:
replicas: 1
selector:
matchLabels:
apps: mysql80
template:
metadata:
labels:
apps: mysql80
spec:
containers:
– name: mysql
image: harbor.ysl.com/ysl-wordpress/mysql:8.0.36-oracle
ports:
– containerPort: 3306
env:
– name: MYSQL_ROOT_PASSWORD
value: yangsenlin
– name: MYSQL_USER
value: ysl
– name: MYSQL_PASSWORD
value: “ysl”
—
apiVersion: v1
kind: Service
metadata:
name: mysql80-service
spec:
selector:
apps: mysql80
ports:
– protocol: TCP
port: 3306
targetPort: 3306
—
apiVersion: v1
kind: ConfigMap
metadata:
name: my.cnf
data:
.my.cnf: |-
[client]
user = ysl
password = ysl
[client.servers]
user = ysl
password = ysl
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-exporter-deployment
spec:
replicas: 1
selector:
matchLabels:
apps: mysql-exporter
template:
metadata:
labels:
apps: mysql-exporter
spec:
volumes:
– name: data
configMap:
name: my.cnf
items:
– key: .my.cnf
path: .my.cnf
containers:
– name: mysql-exporter
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/mysqld-exporter:v0.15.1
command:
– mysqld_exporter
– –config.my-cnf=/root/my.cnf
– –mysqld.address=mysql80-service.default.svc.ysl.com:3306
securityContext:
runAsUser: 0
ports:
– containerPort: 9104
#env:
#- name: DATA_SOURCE_NAME
# value: mysql_exporter:yangsenlin@(mysql80-service.default.svc.yangsenlin.com:3306)
volumeMounts:
– name: data
mountPath: /root/my.cnf
subPath: .my.cnf
—
apiVersion: v1
kind: Service
metadata:
name: mysql-exporter-service
labels:
apps: mysqld
spec:
selector:
apps: mysql-exporter
ports:
– protocol: TCP
port: 9104
targetPort: 9104
name: mysql80
—
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: ysl-mysql-smon
spec:
jobLabel: kubeadm-mysql-k8s-yangsenlin
endpoints:
– interval: 3s
# 这里的端口可以写svc的端口号,也可以写svc的名称。
# 但我推荐写svc端口名称,这样svc就算修改了端口号,只要不修改svc端口的名称,那么我们此处就不用再次修改哟。
# port: 9104
port: mysql80
path: /metrics
scheme: http
namespaceSelector:
matchNames:
– default
selector:
matchLabels:
apps: mysqld
[root@master231 servicemonitors]#
2.Prometheus访问测试
mysql_up
3.Grafana导入模板
7362
47 helm
1 概述
helm有点类似于Linux的yum,apt工具,帮助我们管理K8S集群的资源清单。
Helm 帮助您管理 Kubernetes 应用—— Helm Chart,即使是最复杂的 Kubernetes 应用程序,都可以帮助您定义,安装和升级。
Helm Chart 易于创建、发版、分享和发布,所以停止复制粘贴,开始使用 Helm 吧。
Helm 是 CNCF 的毕业项目,由 Helm 社区维护。
官方文档:
https://helm.sh/zh/
2 安装helm
2.1 下载helm并解压
root@master231:~# wget https://get.helm.sh/helm-v3.17.1-linux-amd64.tar.gz
root@master231:~# tar xf helm-v3.17.1-linux-amd64.tar.gz -C /usr/local/bin linux-amd64/helm --strip-components=1
root@master231:~#
2.2 配置helm自动补全
root@master231:~# helm completion bash > /etc/bash_completion.d/helm
root@master231:~# source /etc/bash_completion.d/helm
root@master231:~# echo ‘source /etc/bash_completion.d/helm’ >> ~/.bashrc
3 helm的基本管理
3.1 创建chart
root@master231:~/manifests# mkdir helm
root@master231:~/manifests# cd helm/
root@master231:~/manifests/helm# helm create test-helm
Creating test-helm
3.2 查看chart
root@master231:~/manifests/helm# tree test-helm
test-helm
├── charts #包含chart依赖的其他chart
├── Chart.yaml #包含了chart信息的YAML文件
├── templates #模板目录,当和values结合时,可生成有效的kubernetes manifest文件
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── NOTES.txt #可选,包含简要使用说明的纯文本文件
│ ├── serviceaccount.yaml
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml #默认的配置值
参考链接:
https://helm.sh/zh/docs/topics/charts/#chart-%E6%96%87%E4%BB%B6%E7%BB%93%E6%9E%84
3.3 修改默认的values.yaml
root@master231:~/manifests/helm# sed -i "/repository\:/s#nginx#crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps#" test-helm/values.yaml
root@master231:~/manifests/helm# grep "repository:" test-helm/values.yaml
repository: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps
root@master231:~/manifests/helm# sed -ri '/tag\:/s#tag: ""#tag: v1#' test-helm/values.yaml
root@master231:~/manifests/helm# grep "tag:" test-helm/values.yaml
tag: v1
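修改完成后,test-helm/values.yaml中镜像相关字段大致如下(节选,pullPolicy为helm create生成的默认值):
image:
  repository: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps
  pullPolicy: IfNotPresent
  tag: v1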
3.4 基于chart安装服务发行Release
root@master231:~/manifests/helm# helm install myapp test-helm
3.5 查看服务
root@master231:~/manifests/helm/test-helm/templates# helm list
root@master231:~/manifests/helm# kubectl get deploy,svc,pods
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/myapp-test-helm 1/1 1 1 5m46s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/harbor-mysql ClusterIP 10.200.67.2
service/headless-apps ClusterIP None
service/kubernetes ClusterIP 10.200.0.1
service/myapp-test-helm ClusterIP 10.200.247.46
service/quickstart-es-default ClusterIP None
service/quickstart-es-http ClusterIP 10.200.57.152
service/quickstart-es-internal-http ClusterIP 10.200.69.164
service/quickstart-es-transport ClusterIP None
service/svc-headless ClusterIP None
service/svc-xiuxian ClusterIP 10.200.224.60
service/svc-xiuxian-externalname ExternalName
service/svc-xiuxian-loadbalancer LoadBalancer 10.200.0.200 10.0.0.150 81:30081/TCP 6d21h
service/svc-xiuxian-nodeport NodePort 10.200.0.100
NAME READY STATUS RESTARTS AGE
pod/myapp-test-helm-7d5b86dcbf-nklhf 1/1 Running 0 5m46s
pod/quickstart-es-default-0 1/1 Running 1 (96m ago) 2d20h
pod/quickstart-es-default-1 1/1 Running 1 (80m ago) 2d20h
pod/quickstart-es-default-2 1/1 Running 1 (80m ago) 2d20h
root@master231:~/manifests/helm#
root@master231:~/manifests/helm# curl 10.200.247.46
3.6 卸载服务
root@master231:~/manifests/helm# helm uninstall myapp
root@master231:~/manifests/helm# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
root@master231:~/manifests/helm#
3.7 helm架构版本选择
2019年11月Helm团队发布V3版本,相比v2版本最大变化是将Tiller删除,并大部分代码重构。
helm v3相比helm v2还做了很多优化,比如不同命名空间资源同名的情况在v3版本是允许的,我们在生产环境中使用建议大家使用v3版本,不仅仅是因为它版本功能较强,而且相对来说也更加稳定了。
官方地址:
https://helm.sh/docs/intro/install/
github地址:
https://github.com/helm/helm/releases
4 helm的两种升级方式
4.1 安装旧的服务
root@master231:~/manifests/helm# helm install myapp01 test-helm/
root@master231:~/manifests/helm# kubectl get deploy,svc,pods
root@master231:~/manifests/helm# kubectl get svc myapp01-test-helm
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp01-test-helm ClusterIP 10.200.216.101
root@master231:~/manifests/helm# curl 10.200.216.101
4.2 修改要升级的相关参数
root@master231:~/manifests/helm# sed -i '/replicaCount/s#1#3#' test-helm/values.yaml
root@master231:~/manifests/helm# grep replicaCount test-helm/values.yaml
replicaCount: 3
root@master231:~/manifests/helm# sed -i "/tag:/s#v1#v2#" test-helm/values.yaml
root@master231:~/manifests/helm# grep “tag:” test-helm/values.yaml
tag: v2
4.3 基于文件方式升级
root@master231:~/manifests/helm# helm upgrade myapp01 -f test-helm/values.yaml test-helm
4.4 结果验证
root@master231:~/manifests/helm# kubectl get deploy,svc,pods
root@master231:~/manifests/helm# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myapp01 default 2 2025-04-24 12:52:39.161470213 +0800 CST deployed test-helm-0.1.0 1.16.0
root@master231:~/manifests/helm#
4.5 基于环境变量方式升级
root@master231:~/manifests/helm# helm upgrade myapp01 --set replicaCount=5,image.tag=v3 test-helm
4.6 再次验证升级效果
root@master231:~/manifests/helm# kubectl get deploy,svc,pods
root@master231:~/manifests/helm# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myapp01 default 3 2025-04-24 12:55:34.585170484 +0800 CST deployed test-helm-0.1.0 1.16.0
root@master231:~/manifests/helm#
5 helm回滚
5.1 查看Release历史版本
root@master231:~/manifests/helm# helm history myapp01
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Thu Apr 24 12:31:37 2025 superseded test-helm-0.1.0 1.16.0 Install complete
2 Thu Apr 24 12:52:39 2025 superseded test-helm-0.1.0 1.16.0 Upgrade complete
3 Thu Apr 24 12:55:34 2025 deployed test-helm-0.1.0 1.16.0 Upgrade complete
root@master231:~/manifests/helm#
5.2 回滚到上一个版本
root@master231:~/manifests/helm# helm rollback myapp01
root@master231:~/manifests/helm# helm history myapp01
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Thu Apr 24 12:31:37 2025 superseded test-helm-0.1.0 1.16.0 Install complete
2 Thu Apr 24 12:52:39 2025 superseded test-helm-0.1.0 1.16.0 Upgrade complete
3 Thu Apr 24 12:55:34 2025 superseded test-helm-0.1.0 1.16.0 Upgrade complete
4 Thu Apr 24 12:59:40 2025 deployed test-helm-0.1.0 1.16.0 Rollback to 2
5.3 回滚到指定版本
root@master231:~/manifests/helm# helm rollback myapp01 1
root@master231:~/manifests/helm# helm history myapp01
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Thu Apr 24 12:31:37 2025 superseded test-helm-0.1.0 1.16.0 Install complete
2 Thu Apr 24 12:52:39 2025 superseded test-helm-0.1.0 1.16.0 Upgrade complete
3 Thu Apr 24 12:55:34 2025 superseded test-helm-0.1.0 1.16.0 Upgrade complete
4 Thu Apr 24 12:59:40 2025 superseded test-helm-0.1.0 1.16.0 Rollback to 2
5 Thu Apr 24 17:49:27 2025 deployed test-helm-0.1.0 1.16.0 Rollback to 1
6 helm公有仓库管理及es-exporter环境部署
6.1 主流的Chart仓库概述
互联网公开Chart仓库,可以直接使用他们制作好的Chart包:
微软仓库:
http://mirror.azure.cn/kubernetes/charts/
阿里云仓库:
https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
6.2 添加公有仓库
root@master231:~/manifests/helm# helm repo add azure http://mirror.azure.cn/kubernetes/charts/
“azure” has been added to your repositories
root@master231:~/manifests/helm# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
“aliyun” has been added to your repositories
root@master231:~/manifests/helm#
6.3 查看本地的仓库列表
root@master231:~/manifests/helm# helm repo list
NAME URL
azure http://mirror.azure.cn/kubernetes/charts/
aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
root@master231:~/manifests/helm#
6.4 更新本地仓库
root@master231:~/manifests/helm# helm repo update
Hang tight while we grab the latest from your chart repositories…
…Successfully got an update from the “aliyun” chart repository
…Successfully got an update from the “azure” chart repository
Update Complete. ⎈Happy Helming!⎈
root@master231:~/manifests/helm#
6.5 搜索Chart
root@master231:~/manifests/helm# helm search repo elasticsearch
NAME CHART VERSION APP VERSION DESCRIPTION
aliyun/elasticsearch-exporter 0.1.2 1.0.2 Elasticsearch stats exporter for Prometheus
azure/elasticsearch 1.32.5 6.8.6 DEPRECATED Flexible and powerful open source, d…
azure/elasticsearch-curator 2.2.3 5.7.6 DEPRECATED A Helm chart for Elasticsearch Curator
…
6.6 查看Chart的详细信息
root@master231:~/manifests/helm# helm show chart aliyun/elasticsearch-exporter
6.7 查看指定版本chart
root@master231:~/manifests/helm# helm show chart aliyun/elasticsearch-exporter --version 0.1.1
6.8 拉取chart
root@master231:~/manifests/helm# helm pull aliyun/elasticsearch-exporter
root@master231:~/manifests/helm# helm pull aliyun/elasticsearch-exporter --version 0.1.1
root@master231:~/manifests/helm# ls
elasticsearch-exporter-0.1.1.tgz elasticsearch-exporter-0.1.2.tgz test-helm
root@master231:~/manifests/helm#
6.9 部署拉取的chart
略
6.10 删除第三方仓库
root@master231:~/manifests/helm# helm repo remove azure
root@master231:~/manifests/helm# helm repo list
7 基于helm部署Ingress-nginx
7.1 Ingress-Nginx概述
Ingress-Nginx是K8S社区官方维护的一个Ingress Controller,而"nginx-ingress"则是Nginx官方维护的另一个Ingress Controller实现,两者不要混淆。
注意,部署时要观察对比一下K8S和Ingress-Nginx对应的版本依赖关系。
github地址:
https://github.com/kubernetes/ingress-nginx
安装文档:
https://kubernetes.github.io/ingress-nginx/deploy/#installation-guide
官方推荐了三种安装方式:
– 使用”helm”安装;
– 使用”kubectl apply”创建yaml资源清单的方式进行安装;
– 使用第三方插件的方式进行安装;
7.2 添加第三方仓库
root@master231:~/manifests/helm# helm repo add ingress https://kubernetes.github.io/ingress-nginx
[root@master231 helm]# helm repo list
7.3 搜索Ingress-nginx的chart
[root@master231 helm]# helm search repo ingress-nginx
root@master231:~/manifests/helm# helm search repo ingress-nginx
NAME CHART VERSION APP VERSION DESCRIPTION
ysl-ingress/ingress-nginx 4.12.1 1.12.1 Ingress controller for Kubernetes using NGINX a…
root@master231:~/manifests/helm#
root@master231:~/manifests/helm# helm search repo ingress-nginx -l
7.4 下载指定的chart
[root@master231 helm]# helm pull ingress/ingress-nginx --version 4.2.5
7.5 解压软件包并修改配置
[root@master231 helm]# tar xf ingress-nginx-4.2.5.tgz
[root@master231 helm]#
[root@master231 helm]# sed -i '/registry:/s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com#g' ingress-nginx/values.yaml
[root@master231 helm]# sed -i 's#ingress-nginx/controller#yangsenlin-k8s/ingress-nginx#' ingress-nginx/values.yaml
[root@master231 helm]# sed -i 's#ingress-nginx/kube-webhook-certgen#yangsenlin-k8s/ingress-nginx#' ingress-nginx/values.yaml
[root@master231 helm]# sed -i 's#v1.3.0#kube-webhook-certgen-v1.3.0#' ingress-nginx/values.yaml
[root@master231 helm]# sed -ri '/digest:/s@^@#@' ingress-nginx/values.yaml
[root@master231 helm]# sed -i '/hostNetwork:/s#false#true#' ingress-nginx/values.yaml
[root@master231 helm]# sed -i '/dnsPolicy/s#ClusterFirst#ClusterFirstWithHostNet#' ingress-nginx/values.yaml
[root@master231 helm]# sed -i '/kind/s#Deployment#DaemonSet#' ingress-nginx/values.yaml
[root@master231 helm]# sed -i '/default:/s#false#true#' ingress-nginx/values.yaml
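上面这组sed的作用概括如下: 将镜像仓库替换为阿里云镜像并注释掉digest校验;开启hostNetwork并把dnsPolicy改为ClusterFirstWithHostNet;把controller的部署类型由Deployment改为DaemonSet;最后将其设置为默认IngressClass(default: true)。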
7.6 安装ingress-nginx
root@master231:~/manifests/helm# helm upgrade --install my-ingress ingress-nginx -n ingress-nginx --create-namespace
NAME: my-ingress
LAST DEPLOYED: Thu Apr 24 18:58:07 2025
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w my-ingress-ingress-nginx-controller'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: foo
spec:
ingressClassName: nginx
rules:
– host: www.example.com
http:
paths:
– pathType: Prefix
backend:
service:
name: exampleService
port:
number: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
– hosts:
– www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt:
tls.key:
type: kubernetes.io/tls
7.7 验证Ingress-nginx是否安装成功
root@master231:~/manifests/helm# helm list -n ingress-nginx
root@master231:~# kubectl get ingressclass,deploy,svc,po -n ingress-nginx
8 ingress映射http
8.1 概述
NodePort在暴露服务时,每个Service都会监听一个NodePort端口,且多个服务无法复用同一个端口。
因此我们说Service可以理解为四层代理。说白了,就是基于IP:PORT的方式进行代理。
假设”v1.ysl.com”的服务需要监听80端口,而”v2.ysl.com”和”v3.ysl.com”同时也需要监听80端口,svc就很难实现。
这个时候,我们可以借助Ingress来实现此功能,可以将Ingress看做七层代理,底层依旧基于svc进行路由。
而Ingress在K8S是内置的资源,表示主机到svc的解析规则,但具体实现需要安装附加组件(对应的是IngressClass),比如ingress-nginx,traefik等。
IngressClass和Ingress的关系有点类似于: nginx和nginx.conf的关系。
8.2 环境准备
root@master231:~/manifests/add-ones/ingress# cat 01-deploy-svc-xiuxian.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-v1
spec:
replicas: 3
selector:
matchLabels:
apps: v1
template:
metadata:
labels:
apps: v1
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
ports:
– containerPort: 80
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-v2
spec:
replicas: 3
selector:
matchLabels:
apps: v2
template:
metadata:
labels:
apps: v2
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2
ports:
– containerPort: 80
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-v3
spec:
replicas: 3
selector:
matchLabels:
apps: v3
template:
metadata:
labels:
apps: v3
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v3
ports:
– containerPort: 80
—
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-v1
spec:
type: ClusterIP
selector:
apps: v1
ports:
– port: 80
—
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-v2
spec:
type: ClusterIP
selector:
apps: v2
ports:
– port: 80
—
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-v3
spec:
type: ClusterIP
selector:
apps: v3
ports:
– port: 80
root@master231:~/manifests/add-ones/ingress# kubectl apply -f 01-deploy-svc-xiuxian.yaml
root@master231:~/manifests/add-ones/ingress# kubectl get pods –show-labels
NAME READY STATUS RESTARTS AGE LABELS
deploy-xiuxian-v1-686b885479-6cxwd 1/1 Running 0 2m4s apps=v1,pod-template-hash=686b885479
deploy-xiuxian-v1-686b885479-cd7dq 1/1 Running 0 2m4s apps=v1,pod-template-hash=686b885479
deploy-xiuxian-v1-686b885479-kj6cw 1/1 Running 0 2m4s apps=v1,pod-template-hash=686b885479
deploy-xiuxian-v2-58c47cb989-8lhrq 1/1 Running 0 2m4s apps=v2,pod-template-hash=58c47cb989
deploy-xiuxian-v2-58c47cb989-cn865 1/1 Running 0 2m4s apps=v2,pod-template-hash=58c47cb989
deploy-xiuxian-v2-58c47cb989-hgmt6 1/1 Running 0 2m4s apps=v2,pod-template-hash=58c47cb989
deploy-xiuxian-v3-6dc5df4467-69hrc 1/1 Running 0 2m4s apps=v3,pod-template-hash=6dc5df4467
deploy-xiuxian-v3-6dc5df4467-bnlhm 1/1 Running 0 2m4s apps=v3,pod-template-hash=6dc5df4467
deploy-xiuxian-v3-6dc5df4467-rf5xk 1/1 Running 0 2m4s apps=v3,pod-template-hash=6dc5df4467
root@master231:~/manifests/add-ones/ingress#
8.3 编写ingress规则
root@master231:~/manifests/add-ones/ingress# cat 02-ingress-xiuxian.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-xiuxian
spec:
ingressClassName: nginx
rules:
– host: v1.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v1
port:
number: 80
path: /
– host: v2.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v2
port:
number: 80
path: /
– host: v3.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v3
port:
number: 80
path: /
root@master231:~/manifests/add-ones/ingress# kubectl apply -f 02-ingress-xiuxian.yaml
ingress.networking.k8s.io/ingress-xiuxian created
root@master231:~/manifests/add-ones/ingress# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-xiuxian nginx v1.ysl.com,v2.ysl.com,v3.ysl.com 80 11s
root@master231:~/manifests/add-ones/ingress#
root@master231:~/manifests/add-ones/ingress# cat 02-ingress-xiuxian.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-xiuxian
spec:
ingressClassName: nginx
rules:
– host: v1.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v1
port:
number: 80
path: /
– host: v2.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v2
port:
number: 80
path: /
– host: v3.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v3
port:
number: 80
path: /
root@master231:~/manifests/add-ones/ingress# kubectl apply -f 02-ingress-xiuxian.yaml
ingress.networking.k8s.io/ingress-xiuxian created
root@master231:~/manifests/add-ones/ingress# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-xiuxian nginx v1.ysl.com,v2.ysl.com,v3.ysl.com 80 11s
root@master231:~/manifests/add-ones/ingress# kubectl describe ingress
Name: ingress-xiuxian
Labels:
Namespace: default
Address: 10.0.0.151
Default backend: default-http-backend:80 (
Rules:
Host Path Backends
—- —- ——–
v1.ysl.com
/ svc-xiuxian-v1:80 (10.100.1.149:80,10.100.1.150:80,10.100.2.74:80)
v2.ysl.com
/ svc-xiuxian-v2:80 (10.100.1.152:80,10.100.2.73:80,10.100.2.75:80)
v3.ysl.com
/ svc-xiuxian-v3:80 (10.100.1.151:80,10.100.2.76:80,10.100.2.77:80)
Annotations:
Events:
Type Reason Age From Message
—- —— —- —- ——-
Normal Sync 2m46s (x2 over 3m30s) nginx-ingress-controller Scheduled for sync
Normal Sync 2m45s (x2 over 3m30s) nginx-ingress-controller Scheduled for sync
root@master231:~/manifests/add-ones/ingress#
8.5 windows添加解析
8.6 访问ingress-nginx服务
8.7 ingress和ingress class底层原理
root@master231:~/manifests/add-ones/ingress# kubectl get pods -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-ingress-ingress-nginx-controller-h949x 1/1 Running 3 (68m ago) 17h 10.0.0.233 worker233
my-ingress-ingress-nginx-controller-rmjjk 1/1 Running 5 (68m ago) 17h 10.0.0.232 worker232
root@master231:~/manifests/add-ones/ingress#
root@master231:~/manifests/add-ones/ingress# kubectl -n ingress-nginx exec -it my-ingress-ingress-nginx-controller-h949x — sh
/etc/nginx $ grep ysl.com /etc/nginx/nginx.conf
## start server v1.ysl.com
server_name v1.ysl.com ;
## end server v1.ysl.com
## start server v2.ysl.com
server_name v2.ysl.com ;
## end server v2.ysl.com
## start server v3.ysl.com
server_name v3.ysl.com ;
## end server v3.ysl.com
9 ingress映射https
9.1 生成证书文件
root@master231:~/manifests/add-ones/ingress/02-casedemo-https# openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=www.ysl.com"
9.2 将证书文件以secrets形式存储
root@master231:~/manifests/add-ones/ingress/02-casedemo-https# kubectl create secret tls ca-secret --cert=tls.crt --key=tls.key
[root@master231 02-casedemo-https]# kubectl get secrets ca-secret
NAME TYPE DATA AGE
ca-secret kubernetes.io/tls 2 2m37s
9.3 部署测试
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-apple
spec:
replicas: 3
selector:
matchLabels:
apps: apple
template:
metadata:
labels:
apps: apple
spec:
containers:
– name: apple
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:apple
ports:
– containerPort: 80
—
apiVersion: v1
kind: Service
metadata:
name: svc-apple
spec:
selector:
apps: apple
ports:
– protocol: TCP
port: 80
targetPort: 80
[root@master231 02-casedemo-https]# kubectl apply -f deploy-apple.yaml
[root@master231 02-casedemo-https]# kubectl get pods –show-labels
NAME READY STATUS RESTARTS AGE LABELS
deployment-apple-697d97ff95-gwr8v 1/1 Running 0 19s apps=apple,pod-template-hash=697d97ff95
deployment-apple-697d97ff95-lzspk 1/1 Running 0 19s apps=apple,pod-template-hash=697d97ff95
deployment-apple-697d97ff95-ppsjd 1/1 Running 0 19s apps=apple,pod-template-hash=697d97ff95
[root@master231 02-casedemo-https]#
9.4 配置ingress添加tls证书
[root@master231 02-casedemo-https]# cat ingress-tls.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-tls-https
# 如果指定了”ingressClassName”参数,就不需要在这里重复声明啦。
# 如果你的K8S 1.22- 版本,则使用注解的方式进行传参即可。
#annotations:
# kubernetes.io/ingress.class: “nginx”
spec:
# 指定Ingress Class,要求你的K8S 1.22+
ingressClassName: nginx
rules:
– host: www.ysl.com
http:
paths:
– backend:
service:
name: svc-apple
port:
number: 80
path: /
pathType: ImplementationSpecific
# 配置https证书
tls:
– hosts:
– www.ysl.com
secretName: ca-secret
[root@master231 02-casedemo-https]#
[root@master231 02-casedemo-https]# kubectl get ingress ingress-tls-https
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-tls-https nginx www.ysl.com 80, 443 10s
[root@master231 02-casedemo-https]#
ingress-tls-https nginx www.ysl.com 10.0.0.151 80, 443 41s
[root@master231 02-casedemo-https]# kubectl describe ingress ingress-tls-https
Name: ingress-tls-https
Labels:
Namespace: default
Address: 10.0.0.151
Default backend: default-http-backend:80 (
TLS:
ca-secret terminates www.ysl.com
Rules:
Host Path Backends
—- —- ——–
www.ysl.com
/ svc-apple:80 (10.100.1.153:80,10.100.2.78:80,10.100.2.79:80)
Annotations:
Events:
Type Reason Age From Message
—- —— —- —- ——-
Normal Sync 38s (x2 over 57s) nginx-ingress-controller Scheduled for sync
Normal Sync 38s (x2 over 57s) nginx-ingress-controller Scheduled for sync
9.5 windows添加解析
10.0.0.233 www.ysl.com
9.6 访问测试
https://www.ysl.com/
如果Google浏览器不认可自建证书,可以用鼠标在页面空白处单击左键,而后输入:"thisisunsafe",就会自动跳转。
10 基于helm部署traefik
参考链接:
https://doc.traefik.io/traefik/getting-started/install-traefik/#use-the-helm-chart
10.1 添加仓库
root@master231:~# helm repo add traefik https://traefik.github.io/charts
10.2 更新仓库信息
root@master231:~# helm repo update
10.3 安装traefik
root@master231:~# helm pull traefik/traefik
[root@master231 helm]# tar xf traefik-35.0.1.tgz
[root@master231 helm]# helm install traefik traefik
[root@master231 helm]# helm list
10.4 查看服务
[root@master231 02-casedemo-https]# kubectl get ingress,deploy,svc,pods
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/traefik 1/1 1 1 162m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.200.0.1
service/traefik LoadBalancer 10.200.197.65 10.0.0.152 80:21519/TCP,443:13173/TCP 162m
NAME READY STATUS RESTARTS AGE
pod/traefik-ccd698b77-2zpbq 1/1 Running 0 162m
[root@master231 02-casedemo-https]#
10.5 创建测试案例
[root@master231 03-casedemo-http-traefik]# cat 01-deploy-svc-xiuxian.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-v1
spec:
replicas: 3
selector:
matchLabels:
apps: v1
template:
metadata:
labels:
apps: v1
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v1
ports:
– containerPort: 80
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-v2
spec:
replicas: 3
selector:
matchLabels:
apps: v2
template:
metadata:
labels:
apps: v2
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2
ports:
– containerPort: 80
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-xiuxian-v3
spec:
replicas: 3
selector:
matchLabels:
apps: v3
template:
metadata:
labels:
apps: v3
spec:
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v3
ports:
– containerPort: 80
—
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-v1
spec:
type: ClusterIP
selector:
apps: v1
ports:
– port: 80
—
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-v2
spec:
type: ClusterIP
selector:
apps: v2
ports:
– port: 80
—
apiVersion: v1
kind: Service
metadata:
name: svc-xiuxian-v3
spec:
type: ClusterIP
selector:
apps: v3
ports:
– port: 80
[root@master231 03-casedemo-http-traefik]# cat 02-ingress-xiuxian.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-xiuxian
spec:
# ingressClassName: nginx
ingressClassName: traefik
rules:
– host: v1.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v1
port:
number: 80
path: /
– host: v2.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v2
port:
number: 80
path: /
– host: v3.ysl.com
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-xiuxian-v3
port:
number: 80
path: /
[root@master231 03-casedemo-http-traefik]#
[root@master231 03-casedemo-http-traefik]# kubectl apply -f .
deployment.apps/deploy-xiuxian-v1 created
deployment.apps/deploy-xiuxian-v2 created
deployment.apps/deploy-xiuxian-v3 created
service/svc-xiuxian-v1 created
service/svc-xiuxian-v2 created
service/svc-xiuxian-v3 created
ingress.networking.k8s.io/ingress-xiuxian created
[root@master231 03-casedemo-http-traefik]#
[root@master231 03-casedemo-http-traefik]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-xiuxian traefik v1.ysl.com,v2.ysl.com,v3.ysl.com 10.0.0.152 80 28s
[root@master231 03-casedemo-http-traefik]#
[root@master231 03-casedemo-http-traefik]# kubectl describe ingress
Name: ingress-xiuxian
Labels:
Namespace: default
Address: 10.0.0.152
Default backend: default-http-backend:80 (
Rules:
Host Path Backends
—- —- ——–
v1.ysl.com
/ svc-xiuxian-v1:80 (10.100.1.155:80,10.100.2.81:80,10.100.2.84:80)
v2.ysl.com
/ svc-xiuxian-v2:80 (10.100.1.154:80,10.100.2.82:80,10.100.2.85:80)
v3.ysl.com
/ svc-xiuxian-v3:80 (10.100.1.156:80,10.100.1.157:80,10.100.2.83:80)
Annotations:
Events:
10.6 windows添加解析
10.0.0.152 v1.ysl.com v2.ysl.com v3.ysl.com
10.7 访问测试
http://v1.ysl.com
http://v2.ysl.com
http://v3.ysl.com
11 traefik开启dashboard
11.1 拉取最新的chart
[root@master231 helm]# helm pull traefik/traefik
11.2 解压软件包
[root@master231 helm]# tar xf traefik-35.0.1.tgz
11.3 开启dashboard参数
[root@master231 helm]# vi traefik/values.yaml
187 ingressRoute:
188 dashboard:
189 # — Create an IngressRoute for the dashboard
190 #enabled: false
191 enabled: true
11.4 重新安装traefik
[root@master231 helm]# helm uninstall traefik
[root@master231 helm]# helm install mytraefik traefik
11.5 开启端口转发
[root@master231 helm]# kubectl port-forward --address=0.0.0.0 mytraefik-7f7b4f766c-zv5x5 8080:8080
Forwarding from 0.0.0.0:8080 -> 8080
11.6 访问测试traefik的WebUI
http://10.0.0.231:8080/dashboard/
12 测试cm,deploy,ingress,svc联动
12.1 编写资源清单
[root@master231 04-casedemo-games]# cat deployments.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ysl-games
spec:
replicas: 3
selector:
matchLabels:
apps: game
template:
metadata:
labels:
apps: game
spec:
volumes:
– name: conf
configMap:
name: cm-game
items:
– key: games.conf
path: games.conf
containers:
– name: c1
image: forestysl2020/ysl-games:v0.6
volumeMounts:
– name: conf
mountPath: /etc/nginx/conf.d/games.conf
subPath: games.conf
ports:
– containerPort: 81
name: game
livenessProbe:
httpGet:
port: 81
path: /index.html
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
httpGet:
port: 81
path: /index.html
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
startupProbe:
httpGet:
port: 81
path: /index.html
failureThreshold: 3
initialDelaySeconds: 3
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
[root@master231 04-casedemo-games]# cat services.yaml
apiVersion: v1
kind: Service
metadata:
name: svc-games
spec:
type: ClusterIP
selector:
apps: game
ports:
– port: 81
[root@master231 04-casedemo-games]# cat configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cm-game
data:
games.conf: |
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/bird/;
server_name game01.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/pinshu/;
server_name game02.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/tanke/;
server_name game03.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/chengbao/;
server_name game04.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/motuo/;
server_name game05.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/liferestart/;
server_name game06.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/huangjinkuanggong/;
server_name game07.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/feijidazhan/;
server_name game08.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/zhiwudazhanjiangshi/;
server_name game09.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/xiaobawang/;
server_name game10.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/pingtai/;
server_name game11.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/dayu/;
server_name game12.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/maliao/;
server_name game13.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/menghuanmonizhan/;
server_name game14.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/qieshuiguo/;
server_name game15.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/wangzhezhicheng/;
server_name game16.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/zhiwuVSjiangshi/;
server_name game17.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/doudizhu/;
server_name game18.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/killbird/;
server_name game19.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/tankedazhan/;
server_name game20.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/buyu/;
server_name game21.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/huimiejiangshi/;
server_name game22.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/renzhegame/;
server_name game23.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/tankedazhan/;
server_name game24.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/tiaoyitiao/;
server_name game25.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/yangyang/;
server_name game26.ysl.com;
}
server {
listen 0.0.0.0:81;
root /usr/local/nginx/html/zombie-master/;
server_name game27.ysl.com;
}
[root@master231 04-casedemo-games]# cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-games
spec:
# ingressClassName: nginx
ingressClassName: mytraefik
rules:
- host: "*.ysl.com"
http:
paths:
– pathType: Prefix
backend:
service:
name: svc-games
port:
number: 81
path: /
[root@master231 04-casedemo-games]#
12.2 访问测试
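可以在客户端把任意一个game域名解析到traefik的对外地址后用浏览器访问,或直接用curl指定Host头验证泛域名规则(示例命令,假设对外地址仍为10.0.0.152):
curl -H "Host: game01.ysl.com" http://10.0.0.152/
curl -H "Host: game27.ysl.com" http://10.0.0.152/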
13 ingress基础作用说明
Ingress-nginx 是一个基于 Kubernetes 的开源项目,它使用 NGINX 作为反向代理和负载均衡器来管理 Kubernetes 集群中的 HTTP 和 HTTPS 流量。以下是它的主要功能和作用:
### 1. **反向代理**
– **定义**:Ingress-nginx 作为反向代理,接收外部访问请求,并将请求转发到集群内的后端服务。
– **作用**:用户通过一个统一的入口访问集群内的多个服务,而无需直接暴露每个服务的 IP 地址或端口。例如,一个用户访问 `http://example.com/api`,Ingress-nginx 可以将请求转发到后端的 `api-service`。
### 2. **负载均衡**
– **定义**:Ingress-nginx 可以将流量分发到多个后端服务实例,实现负载均衡。
– **作用**:当多个后端服务实例运行时,Ingress-nginx 可以根据配置的策略(如轮询、最少连接等)将流量均匀分配到各个实例,提高系统的可用性和性能。例如,如果有三个 `web-service` 实例,Ingress-nginx 可以将流量均匀分配给这三个实例。
### 3. **基于路径和域名的路由**
– **定义**:Ingress-nginx 支持基于路径和域名的路由规则。
– **作用**:
– **基于路径**:可以根据请求的 URL 路径将流量转发到不同的后端服务。例如,`http://example.com/api` 转发到 `api-service`,`http://example.com/ui` 转发到 `ui-service`。
– **基于域名**:可以根据请求的域名将流量转发到不同的服务。例如,`api.example.com` 转发到 `api-service`,`ui.example.com` 转发到 `ui-service`。
### 4. **SSL/TLS 终止**
– **定义**:Ingress-nginx 可以管理 SSL/TLS 证书,并在入口处终止加密连接。
– **作用**:用户可以通过 HTTPS 访问集群内的服务,而无需在每个服务中单独配置 SSL/TLS。Ingress-nginx 可以解密请求,然后以明文或加密方式转发到后端服务。这简化了 SSL/TLS 的管理,提高了安全性。
### 5. **支持 WebSocket 和 gRPC**
– **定义**:Ingress-nginx 支持 WebSocket 和 gRPC 等现代协议。
– **作用**:WebSocket 用于实时通信,gRPC 用于高性能的微服务通信。Ingress-nginx 可以正确处理这些协议的流量,确保它们能够正常转发到后端服务。
### 6. **支持自定义 NGINX 配置**
– **定义**:Ingress-nginx 允许用户通过注解(Annotations)或自定义配置文件来修改 NGINX 的行为。
– **作用**:用户可以根据自己的需求调整 NGINX 的配置,例如设置超时时间、启用压缩、配置重写规则等。这提供了高度的灵活性,满足不同场景的需求。
### 7. **集成 Kubernetes 生态**
– **定义**:Ingress-nginx 是 Kubernetes 生态系统的一部分,与 Kubernetes 的其他组件(如 Service、Deployment 等)紧密集成。
– **作用**:它可以通过 Kubernetes 的 API 动态更新配置,例如当后端服务的副本数量发生变化时,Ingress-nginx 会自动更新负载均衡的配置,确保流量正确转发。
Ingress-nginx 是 Kubernetes 集群中管理外部访问流量的重要组件,它提供了强大的功能,简化了集群内服务的暴露和管理。
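结合上面第3点和第6点,下面给出一个示例Ingress,演示基于路径的路由以及通过注解调整NGINX行为(域名、服务名和注解取值均为假设,仅作参考):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-path-routing
  annotations:
    # ingress-nginx注解:调整后端读超时与请求体大小上限(示例取值)
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /ui
        pathType: Prefix
        backend:
          service:
            name: ui-service
            port:
              number: 80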
14 helm的chart常用字段说明
https://helm.sh/zh/docs/chart_template_guide/function_list/
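下面是templates/中一段常见写法的小示例,演示default、quote等常用模板函数的用法(其中logLevel等values字段为假设):
# templates/configmap.yaml(示例)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  labels:
    app.kubernetes.io/name: {{ .Chart.Name | quote }}
data:
  # 若values.yaml未定义logLevel,则取默认值info
  log_level: {{ .Values.logLevel | default "info" | quote }}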
15 helm的chart打包并推送到harbor仓库
15.1 打包
[root@master231 helm]# helm package ingress-nginx
Successfully packaged chart and saved it to: /manifests/helm/ingress-nginx-4.2.5.tgz
[root@master231 helm]#
15.2 推送到harbor仓库
[root@master231 helm]# helm push ysl-games-25.02.21.tgz oci://harbor.ysl.com/ysl-helm --insecure-skip-tls-verify
Pushed: harbor.ysl.com/ysl-helm/ysl-games:25.02.21
Digest: sha256:928b8f64e94b8c5fe158d582b65ee8310c8ca9d0b2c391d81a4822dc4c8adaed
[root@master231 helm]#
15.3 从harbor拉取Chart
[root@master231 tmp]# helm pull oci://harbor.ysl.com/ysl-helm/ysl-games --version 25.02.21 --insecure-skip-tls-verify
Pulled: harbor.ysl.com/ysl-helm/ysl-games:25.02.21
Digest: sha256:928b8f64e94b8c5fe158d582b65ee8310c8ca9d0b2c391d81a4822dc4c8adaed
[root@master231 tmp]#
48 flannel的工作原理及优化
1 Flannel的工作模式
– udp:
早期支持的一种工作模式,由于性能差,目前官方已弃用。
– vxlan:
将源数据报文进行封装为二层报文(需要借助物理网卡转发),进行跨主机转发。
– host-gw:
将容器网络的路由信息写到宿主机的路由表上。尽管效率高,但不支持跨网段。
– directrouting:
将vxlan和host-gw两种工作模式结合:同网段的节点间直接走host-gw路由,跨网段时自动回退为vxlan封装。
2 flannel模式测试
[root@master231 flannel]# cat 01-pods-multiple-worker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: apps-v1
spec:
replicas: 1
selector:
matchLabels:
version: v1
template:
metadata:
labels:
version: v1
spec:
nodeName: worker232
containers:
– name: c1
ports:
– containerPort: 80
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
—
apiVersion: apps/v1
kind: Deployment
metadata:
name: apps-v2
spec:
replicas: 1
selector:
matchLabels:
version: v2
template:
metadata:
labels:
version: v2
spec:
nodeName: worker233
containers:
– name: c2
ports:
– containerPort: 80
image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
Flannel实现同节点Pod数据传输cni0的工作原理:
1.假设有2个Pod,名称为p1和p2;
2.p1的eth0网卡对端设备连接在cni0的网桥设备上;
3.P2的eth0网卡对端设备也连接在cni0的网桥设备上;
4.数据传输时,可以基于cni0进行数据交换传输;
tips:
1.移除cni0网桥后,同节点的2个Pod将无法通信。
2.手动重建cni0后,已有Pod的veth对端设备不会自动绑定到cni0,需要手动将其重新接入网桥(课堂演示,亦可参考下面的示意命令)。
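下面是把Pod的veth对端重新接回cni0网桥的示意命令(vethxxxx为假设的设备名,需在节点上根据实际情况确认):
# 在Pod所在节点上列出veth设备,找到该Pod对应的对端网卡
ip -d link show type veth
# 将其重新挂到cni0网桥上
ip link set vethxxxx master cni0
# 或者使用brctl(需要安装bridge-utils)
brctl addif cni0 vethxxxx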
3 验证
[root@master231 flannel]# tcpdump -i eth0 -en host 10.0.0.232 and udp port 8472 | grep 2.135 -A 2 -B 2
4 修改flannel的工作模式
[root@master231 cni]# vim kube-flannel.yml
…
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }
…
[root@master231 ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel unchanged
serviceaccount/flannel unchanged
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds unchanged
[root@master231 ~]#
5 测试验证
[root@master231 falnnel]# ifconfig cni0
cni0: flags=4163
inet 10.100.0.1 netmask 255.255.255.0 broadcast 10.100.0.255
inet6 fe80::14d5:5dff:fefa:84be prefixlen 64 scopeid 0x20
ether 16:d5:5d:fa:84:be txqueuelen 1000 (Ethernet)
RX packets 54028 bytes 10239711 (10.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 59908 bytes 43523237 (43.5 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@master231 falnnel]#
[root@master231 falnnel]# route -n | grep 10.100
10.100.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.100.1.0 10.100.1.0 255.255.255.0 UG 0 0 0 flannel.1
10.100.2.0 10.100.2.0 255.255.255.0 UG 0 0 0 flannel.1
[root@master231 falnnel]#
[root@master231 falnnel]# kubectl get pods -n kube-flannel -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-57zww 1/1 Running 1 (114m ago) 8d 10.0.0.233 worker233
kube-flannel-ds-5gv68 1/1 Running 1 (114m ago) 8d 10.0.0.232 worker232
kube-flannel-ds-m98sv 1/1 Running 1 (114m ago) 10d 10.0.0.231 master231
[root@master231 falnnel]#
[root@master231 falnnel]# kubectl -n kube-flannel delete pods –all
pod “kube-flannel-ds-57zww” deleted
pod “kube-flannel-ds-5gv68” deleted
pod “kube-flannel-ds-m98sv” deleted
[root@master231 falnnel]#
[root@master231 falnnel]# kubectl get pods -n kube-flannel -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-8gq28 1/1 Running 0 14s 10.0.0.232 worker232
kube-flannel-ds-kpv8b 1/1 Running 0 13s 10.0.0.233 worker233
kube-flannel-ds-p98vs 1/1 Running 0 13s 10.0.0.231 master231
[root@master231 falnnel]#
[root@master231 falnnel]# route -n | grep 10.100
10.100.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.100.1.0 10.0.0.232 255.255.255.0 UG 0 0 0 eth0
10.100.2.0 10.0.0.233 255.255.255.0 UG 0 0 0 eth0
[root@master231 falnnel]#
6 切换flannel的工作模式为"DirectRouting"(推荐配置)
[root@master231 cni]# vim kube-flannel.yml
…
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }
…
[root@master231 ~]#
[root@master231 ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel unchanged
serviceaccount/flannel unchanged
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds unchanged
[root@master231 ~]#
7 删除Pod测试
[root@worker233 ~]# route -n | grep 10.100
10.100.0.0 10.100.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.100.1.0 10.100.1.0 255.255.255.0 UG 0 0 0 flannel.1
10.100.2.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
[root@worker233 ~]#
[root@worker233 ~]#
[root@worker233 ~]# route -n | grep 10.100
10.100.0.0 10.0.0.231 255.255.255.0 UG 0 0 0 eth0
10.100.1.0 10.0.0.232 255.255.255.0 UG 0 0 0 eth0
10.100.2.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
[root@worker233 ~]#
49 kubeapps环境部署及故障排查
– kubeapps环境部署及故障排查
推荐阅读:
https://github.com/vmware-tanzu/kubeapps/
官方的Chart【存在问题,需要去docker官方拉取数据】
https://github.com/bitnami/charts/tree/main/bitnami/kubeapps
温馨提示:
官方的kubeapps文档默认会从docker官网拉取镜像,受国内网络环境影响可能无法访问。
1.添加第三方仓库
[root@master231 helm]# helm repo add bitnami https://charts.bitnami.com/bitnami
“bitnami” has been added to your repositories
[root@master231 helm]#
2.查看仓库信息
[root@master231 helm]# helm repo list
NAME URL
ysl-ingress https://kubernetes.github.io/ingress-nginx
traefik https://traefik.github.io/charts
openebs https://openebs.github.io/openebs
bitnami https://charts.bitnami.com/bitnami
[root@master231 helm]#
3.更新repo源
[root@master231 helm]# helm repo update
Hang tight while we grab the latest from your chart repositories…
…Successfully got an update from the “openebs” chart repository
…Successfully got an update from the “bitnami” chart repository
…Successfully got an update from the “ysl-ingress” chart repository
…Successfully got an update from the “traefik” chart repository
Update Complete. ⎈Happy Helming!⎈
[root@master231 helm]#
4.搜索kubeapps
[root@master231 helm]# helm search repo kubeapps -l
NAME CHART VERSION APP VERSION DESCRIPTION
…
bitnami/kubeapps 12.4.12 2.8.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.11 2.8.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.10 2.8.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.9 2.8.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.8 2.8.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.7 2.8.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.6 2.8.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.5 2.8.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.4 2.7.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.3 2.7.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.2 2.7.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.4.1 2.7.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.3.3 2.7.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.3.2 2.7.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.3.1 2.7.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.2.10 2.7.0 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.2.9 2.6.4 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.2.8 2.6.4 Kubeapps is a web-based UI for launching and ma…
bitnami/kubeapps 12.2.7 2.6.4 Kubeapps is a web-based UI for launching and ma…
…
[root@master231 helm]#
5.拉取Chart的tar包到本地
[root@master231 helm]# helm pull bitnami/kubeapps --version 12.2.10
[root@master231 helm]#
[root@master231 helm]# ll kubeapps-12.2.10.tgz
-rw-r–r– 1 root root 221609 Feb 21 18:05 kubeapps-12.2.10.tgz
[root@master231 helm]#
5.解压tar包
[root@master231 helm]# tar xf kubeapps-12.2.10.tgz
[root@master231 helm]#
[root@master231 helm]# ll kubeapps
total 256
drwxr-xr-x 5 root root 4096 Feb 21 18:06 ./
drwxr-xr-x 8 root root 4096 Feb 21 18:06 ../
-rw-r–r– 1 root root 406 Aug 26 2023 Chart.lock
drwxr-xr-x 5 root root 4096 Feb 21 18:06 charts/
-rw-r–r– 1 root root 1764 Aug 26 2023 Chart.yaml
drwxr-xr-x 2 root root 4096 Feb 21 18:06 crds/
-rw-r–r– 1 root root 421 Aug 26 2023 .helmignore
-rw-r–r– 1 root root 136307 Aug 26 2023 README.md
drwxr-xr-x 7 root root 4096 Feb 21 18:06 templates/
-rw-r–r– 1 root root 3361 Aug 26 2023 values.schema.json
-rw-r–r– 1 root root 85725 Aug 26 2023 values.yaml
[root@master231 helm]#
6.安装kubeapps
[root@master231 helm]# helm install mykubeapps kubeapps
NAME: mykubeapps
LAST DEPLOYED: Fri Feb 21 18:10:00 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kubeapps
CHART VERSION: 12.2.10
APP VERSION: 2.7.0
** Please be patient while the chart is being deployed **
Tip:
Watch the deployment status using the command: kubectl get pods -w --namespace default
Kubeapps can be accessed via port 80 on the following DNS name from within your cluster:
mykubeapps.default.svc.cluster.local
To access Kubeapps from outside your K8s cluster, follow the steps below:
1. Get the Kubeapps URL by running these commands:
echo “Kubeapps URL: http://127.0.0.1:8080”
kubectl port-forward –namespace default service/mykubeapps 8080:80
2. Open a browser and access Kubeapps using the obtained URL.
[root@master231 helm]#
7.修改svc的类型并访问WebUI
http://10.0.0.152/#/login
8.创建sa并获取token
[root@master231 helm]# cat sa-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ysl
—
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-ysl
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
– kind: ServiceAccount
name: ysl
namespace: default
[root@master231 helm]#
[root@master231 helm]# kubectl apply -f sa-admin.yaml
serviceaccount/ysl unchanged
clusterrolebinding.rbac.authorization.k8s.io/cluster-ysl created
[root@master231 helm]#
[root@master231 helm]# kubectl get secrets `kubectl get sa ysl -o jsonpath='{.secrets[0].name}'` -o jsonpath='{.data.token}' | base64 -d ;echo
eyJhbGciOiJSUzI1NiIsImtpZCI6IkZUd19obzc4VmdGM1pzbnBTazlpRGlKQU9HQnV4c2ZZNHdhQnpkLWNOa1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImxpbnV4OTUtdG9rZW4tdnc3aGQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibGludXg5NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZhY2MyMTdlLTljNjYtNDU3NC05YWRkLWE2MjVlZWVlM2E0ZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmxpbnV4OTUifQ.K4fSPCS2VdEVEY3OCbHxY3AXqiPeiiURAMXwiQXBw7u5rCCyw0KsVI8gNMswCp5UNTzgOJBfQlOVpaLkyDMJF3U_47CuV4YrEC-Q2zY7iSv80EspWAqgN5wrgT_Wp_v52HY9wlmWfqFHqf8CilxjK0Mxp_zB-3j_nZi-HHYPaE3AXVcm23a_FeS1QI5SjzYKvJijJh9onSwN2P7OtuPo0KkconUc8y4CweQM4OFYXZln8x69MoUkST6RLNjhdJwVrqMDlpKLt3a5rfZMQHjZmbpeQjy49xKAfNkYHwm2jSt7dRZ63mP36rEZTxJhuBryPJ5bke5UjbdF8kkac0H3qg
[root@master231 helm]#
9.登录kubeapps的WebUI
使用上一步的token进行登录即可.
50 认证体系架构
1 API-Server内置的访问控制机制
API Server的访问方式:
– 集群外部: https://IP:Port
– 集群内部: https://kubernetes.default.svc
1.1 API Server内置了插件化的访问控制机制(每种访问控制机制均有一组专用的插件栈)
1.2 认证(Authentication):
核验请求者身份的合法性,进行身份识别,验证客户端身份。
身份核验过程遵循“或”逻辑,且任何一个插件核验成功后都将不再进行后续的插件验证。
均不成功,则失败,或以“匿名者”身份访问,建议禁用“匿名者”。
1.3 授权(Authorization):
核验请求的操作是否获得许可,验证客户端是否有权限操作资源对象。
鉴权过程遵循“或”逻辑,且任何一个插件对操作的许可授权后都将不再进行后续的插件验证。
均未许可,则拒绝请求的操作
1.4 准入控制(Admission Control):
检查操作内容是否合规,仅同”写”请求相关,负责实现”检验”字段类型是否合法及和补全默认字段。
内容合规性检查过程遵循“与”逻辑,且无论成败,每次的操作请求都要经由所有插件的检验。
将数据写入etcd前,负责检查内容的有效性,因此仅对“写”操作有效。
分两类:validating(校验)和 mutating(补全或订正)。
2.身份认证策略
2.1 X.509客户端证书认证:
在双向TLS通信中,客户端持有由受信任CA签发的数字证书;该CA证书需要在kube-apiserver程序启动时,通过--client-ca-file选项传递。
认证通过后,客户端数字证书中的CN(Common Name)即被识别为用户名,而O(Organization)被识别为组名。
kubeadm部署的K8s集群,默认使用”/etc/kubernetes/pki/ca.crt”(各组件间颁发数字证书的CA)进行客户端认证。
2.2 持有者令牌:
– 1.静态令牌文件(Static Token File):
令牌信息保存于文本文件中,由kube-apiserver在启动时通过--token-auth-file选项加载。
加载完成后的文件变动,仅能通过重启程序进行重载,因此,相关的令牌会长期有效。
客户端在HTTP请求中,通过"Authorization: Bearer TOKEN"标头附带令牌以完成认证。
– 2.Bootstrap令牌:
一般用于加入集群时使用,尤其是在集群的扩容场景时会用到。
– 3.Service Account令牌:
该认证方式将由kube-apiserver程序内置直接启用,它借助于经过签名的Bearer Token来验证请求。
签名时使用的密钥可以由--service-account-key-file选项指定,也可以默认使用API Server的tls私钥。
用于将Pod认证到API Server之上,以支持集群内的进程与API Server通信。
K8s可使用ServiceAccount准入控制器自动为Pod关联ServiceAccount。
– 4.OIDC(OpenID Connect)令牌:
有点类似于”微信”,”支付宝”认证的逻辑,自建的话需要配置认证中心。
OAuth2认证机制,通常由底层的IaaS服务所提供。
– 5.Webhook令牌:
基于web的形式进行认证,比如之前配置的”钉钉机器人”,”微信机器人”等;
是一种用于验证Bearer Token的回调机制,能够扩展支持外部的认证服务,例如LDAP等。
2.3 身份认证代理(Authenticating Proxy):
由kube-apiserver从请求报文的特定HTTP标头中识别用户身份,相应的标头名称可由特定的选项配置指定。
kube-apiserver应该基于专用的CA来验证代理服务器身份。
– 匿名请求:
生产环境中建议禁用匿名认证。
3 Kubernetes上的用户
“用户”即服务请求者的身份指代,一般使用身份标识符进行识别,比如用户名,用户组,服务账号,匿名用户等。
Kubernetes系统的用户大体可分Service Account,User Account和Anonymous Account。
3.1 Service Account:
Kubernetes内置的资源类型,用于Pod内的进程访问API Server时使用的身份信息。
引用格式: “system:serviceaccount:NAMESPACE:SA_NAME”
3.2 User Account:
用户账户,指非Pod类的客户端访问API Server时使用的身份标识,一般是现实中的“人”。
API Server没有为这类账户提供保存其信息的资源类型,相关的信息通常保存于外部的文件或认证系统中。
身份核验操作可由API Server进行,也可能是由外部身份认证服务完成。
可以手动定义证书,其中O字段表示组,CN字段表示用户名。
3.3 Anonymous Account:
不能被识别为Service Account,也不能被识别为User Account的用户。
这类账户K8S系统称之为”system:anonymous”,即“匿名用户”。
4 静态令牌文件认证测试
4.1 模拟生成token
root@master231:~# echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
6e8164.d35ac8f098338c52
root@master231:~# echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
ca43c1.a0f3fe638b9b80d3
root@master231:~# echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
673877.9a3066f2a2c2d160
root@master231:~#
4.2 创建csv文件
root@master231:~# cd /etc/kubernetes/pki/
root@master231:/etc/kubernetes/pki# vi token.csv
root@master231:/etc/kubernetes/pki# cat token.csv
fd6f1d.0b58191d3d726a69,yangsenlin,10001,k8s
79ac1a.52d68499662b2b52,forest,10002,k8s
f7a5b9.132ce8cf5643cf2b,ysl,10003,k3s
文件格式为CSV,每行定义一个用户,由“令牌、用户名、用户ID和所属的用户组”四个字段组成,用户组为可选字段
具体格式: token,user,uid,"group1,group2,group3"
4.3 修改api-server参数加载token文件
root@master231:/etc/kubernetes# cat /etc/kubernetes/manifests/kube-apiserver.yaml -n
….
14 - command:
15 - kube-apiserver
16 - --token-auth-file=/etc/kubernetes/pki/token.csv
....
99 - mountPath: /etc/kubernetes/pki/token.csv
100 readOnly: true
101 hostNetwork: true
...
131 - hostPath:
132 path: /etc/kubernetes/pki/token.csv
133 type: File
134 name: yangsenlin-static-token-file
135 status: {}
4.4 在worker节点用kubectl使用token认证并指定api-server证书
root@worker232:/etc/kubernetes/pki# kubectl --server=https://10.0.0.231:6443 --token=fd6f1d.0b58191d3d726a69 --certificate-authority=/etc/kubernetes/pki/ca.crt get nodes
Error from server (Forbidden): nodes is forbidden: User "yangsenlin" cannot list resource "nodes" in API group "" at the cluster scope
root@worker232:/etc/kubernetes/pki# kubectl --server=https://10.0.0.231:6443 --token=79ac1a.52d68499662b2b52 --certificate-authority=/etc/kubernetes/pki/ca.crt get nodes
Error from server (Forbidden): nodes is forbidden: User "forest" cannot list resource "nodes" in API group "" at the cluster scope
root@worker232:/etc/kubernetes/pki#
root@worker232:/etc/kubernetes/pki# kubectl --server=https://10.0.0.231:6443 --token=f7a5b9.132ce8cf5643cf2b --certificate-authority=/etc/kubernetes/pki/ca.crt get nodes
Error from server (Forbidden): nodes is forbidden: User "ysl" cannot list resource "nodes" in API group "" at the cluster scope
root@worker232:/etc/kubernetes/pki#
4.5 curl 基于token认证
不加认证信息,将被识别为匿名用户
root@worker232:~# curl -k https://10.0.0.231:6443
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}root@worker232:~#
root@worker232:~# curl -k -H "Authorization: Bearer 79ac1a.52d68499662b2b52" https://10.0.0.231:6443/api/v1/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"forest\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
5 X509数字证书认证
5.1 基于API-Server签发
5.1.1 创建证书签署请求的秘钥
root@master231:~# openssl genrsa -out forest.key 2048
root@master231:~# ll forest.key
-rw------- 1 root root 1704 May 1 20:43 forest.key
5.1.2 创建证书签署请求
root@master231:~# openssl req -new -key forest.key -out forest.csr -subj "/CN=forest/O=yangsenlin"
root@master231:~# ll forest.*
-rw-r--r-- 1 root root 915 May 1 20:45 forest.csr
-rw------- 1 root root 1704 May 1 20:43 forest.key
CN:用户
O:组
5.1.3 将证书签署请求使用base64编码
root@master231:~# cat forest.csr | base64 | tr -d '\n';echo
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2F6Q0NBVk1DQVFBd0pqRVBNQTBHQTFVRUF3d0dabTl5WlhOME1STXdFUVlEVlFRS0RBcDVZVzVuYzJWdQpiR2x1TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF3UWd5cnJpTnlRWTZlNnZKCjZSblI3cmFVUUYrK2hMaTVmVkNyZER6cnUwMmJxL2ZTemQ4SGlUYUN0dWNUdzRWQ1JuWWloM1hiRG1zWVVLYWgKZkVabi9lcWtmZ091WE0rbUpEQVAxdkdLT1FEVktyRlY4VGVUSGZzMXdSeE9xWTZjMDVqeHQzbkxvemFZd2ZUaQpqekx1Sm4vY3ovb1dPWU54MG1iQmlwQkg1bG45Q0pNb21iMlNnaFNqZ21aS3U1WTJjQjdaZjRQQ0JTb3A3anJ4ClVTUVFNKzIwN0g4V0RpMEZUeG04S09yYnlkc2p3RXFYSkJwSHNLbVBmdUtYOEI5QzBPSjdRSWlFTng1WkN1ME4KWURDc1IxZFI1WlhLaHhUNWNUdDVkMkdOanR6bDJBY3FPNmdjczZFWnMvakN0cG9kTmlGdFFXS1I5S011OWZieQpia091TFFJREFRQUJvQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUxma2hTNW8vUHlUUlRWNlUwMjJic0J5CktLdzdLcEFJOHdMNXo5Y2NKckNzTnlOTXVaWkNsQ3h3U3dGdjNZaU91a1NldDNZU2MySmpta0xSd3N3TVBHbjAKRDljZmplVG83VXlyRW45elR6QnJvZXhHVGp1ZU51UmRoMDgvZi9HbFNMRklLNGhacFRWVjFKNi9WZGFqcjhBTAo1dURSc09VNHZ6dkFXK0dHcFB1SWlnakdsMnBBdlhHekgyODM5SkUycHhEMkdkci9iamVhYXlPb2pZdXBqaWdVCmNDbVBkUGZOK1Q4ckkzb2hqb1pReVM4YnpYeGdZaWFCNTkwWFV2dW9YR1NDMkUrU0FNWVdRZUJwYkJNWGEwQ3YKdllxQzJjdS9Tc2dQWno3SCtMQkpUbG1IcnM3bFNhMlR2ME8yUDNGUnU3TS92MEtVem5EVFJXVUdLR2Z1RDgwPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K
5.1.4 手动创建csr资源
root@master231:~/manifests/auth# cat csr-forest.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: forest-csr
spec:
# 将证书签发请求使用base64编码
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2JEQ0NBVlFDQVFBd0p6RVJNQThHQTFVRUF3d0lhbUZ6YjI1NWFXNHhFakFRQmdOVkJBb01DVzlzWkdKdgplV1ZrZFRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDY0TUwwVVJCcWh1Mi9hCkltNGJLR0paMXlVMDczcEtpU1ZZa2xuZkkyVEY0QWptMDNSWGZkRXVvYVl6NGptME1idmtyOUpQZE5sMlY4QzgKa3o5ZnBGaGhyb2svOTE0MXNPRXRyWDhrbE1QZTY4aVhHTmpWOGR0VFBQcmxyVkV4bktxbkxUT3ova3hNYTdjZgpqSjlTUzhBcWhUSUVUbWwyR2xxS21FdUVhbmZDRS83NUdJQlM0Rm5XZDVKQm8ySzIzYVpQL3BUalBhSnNQcWtWCnlPZS9vZnUzUDZnaUFJZW5JOW0yTGNIcGRXcnZFbFpmeVJqTTFxcU0vbEhYZlpsNm0zVUxVVEdiQWZIcnNBTXEKbURNQzVrL2JNeFJ2V0ZrUkRHcGdnMEJMWUhtdStBaXlGQVBuM1R0NFFla2dDY3hDOTY0MnlUQ1VjTUZNSHNwYgpRT3h2MEZjQ0F3RUFBYUFBTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCdi9Zb0ZYQ0N1S3lFeFlFSzFVQmtJCmphZlM4OWt5L1ZHWlBsSENkb0pYNk1SWHJjTFB6ck1kK0NIWVAvUGNRQ1lYVTQ3Y29pOGJlZWFaeVJjZXpxcVIKWGhyWFZqZTNqcWFRMjd5ODdiRUJZRlJWdnduYjJlazFCelRvOWNHTCs3VGpjMDk5SjhPWFFVSFY5dTQ3QUY0aApWRFRrT0h3MUFTWXJLdzFmN0N6N25TbHdZOEtwMkNmcm5tWlRnRFcvTnZsRHhQMmpZWjFLeE5EMFRHVk4rMk93CmlpMEtHY1BXdXEzclhJbjgxMzJKbjJGOS8vVm5ubyswK0ZheDJZOW53ZXBzQ1drQ0FXcnpWYWJKK3QvUGo1NisKVDhKc2pRMGRaQmtlREp4ckR3Z3htZ1k5aWhCMW9hMVowMXVlM01KNDNWV3VJc2NieTlCbXB6alJDUWdnUTI0aAotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K
# 指定颁发证书的请求类型,仅支持如下三种,且均可以由kube-controller-manager中的"csrsigning"控制器签发。
# "kubernetes.io/kube-apiserver-client":
# 颁发用于向kube-apiserver进行身份验证的客户端证书。
# 对该签名者的请求永远不会被kube-controller-manager自动批准。
#
# "kubernetes.io/kube-apiserver-client-kubelet":
# 颁发kubelet用于向kube-apiserver进行身份验证的客户端证书。
# 对该签名者的请求可以由kube-controller-manager中的"csrapproving"控制器自动批准。
#
# "kubernetes.io/kubelet-serving":
# 颁发kubelet用于对外提供TLS服务端点的证书,kube-apiserver可以安全地连接到这些端点。
# 对该签名者的请求永远不会被kube-controller-manager自动批准。
signerName: kubernetes.io/kube-apiserver-client
# 指定证书的过期时间,此处设置的是10天(3600*24*10=864000)
expirationSeconds: 864000
# 指定在颁发的证书中请求的一组密钥用法。
# 对TLS客户端证书的请求通常包括:
# "数字签名(digital signature)"、"密钥加密(key encipherment)"、"客户端身份验证(client auth)"。
# 对TLS服务证书的请求通常包括:
# "密钥加密(key encipherment)"、"数字签名(digital signature)"、"服务器身份验证(server auth)"。
#
# 有效值为: "signing", "digital signature", "content commitment", "key encipherment", "key agreement",
# "data encipherment", "cert sign", "crl sign", "encipher only", "decipher only", "any", "server auth",
# "client auth", "code signing", "email protection", "s/mime", "ipsec end system", "ipsec tunnel", "ipsec user",
# "timestamping", "ocsp signing", "microsoft sgc", "netscape sgc"。
usages:
- client auth
root@master231:~/manifests/auth# kubectl apply -f csr-forest.yaml
certificatesigningrequest.certificates.k8s.io/forest-csr created
root@master231:~/manifests/auth#
root@master231:~/manifests/auth# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
forest-csr 20s kubernetes.io/kube-apiserver-client kubernetes-admin 10d Pending
root@master231:~/manifests/auth#
此时状态为pending状态
5.1.5 手动签发证书
root@master231:~/manifests/auth# kubectl certificate approve forest-csr
certificatesigningrequest.certificates.k8s.io/forest-csr approved
root@master231:~/manifests/auth# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
forest-csr 107s kubernetes.io/kube-apiserver-client kubernetes-admin 10d Approved,Issued
root@master231:~/manifests/auth#
5.1.6 获取签发后的证书
root@master231:~/manifests/auth# kubectl get csr forest-csr -o jsonpath='{.status.certificate}' | base64 -d > forest.crt
root@master231:~/manifests/auth# ll
total 16
drwxr-xr-x 2 root root 4096 May 1 20:51 ./
drwxr-xr-x 35 root root 4096 May 1 20:47 ../
-rw-r–r– 1 root root 3239 May 1 20:47 csr-forest.yaml
-rw-r–r– 1 root root 1119 May 1 20:51 forest.crt
root@master231:~/manifests/auth#
5.1.7 将证书拷贝到worker节点,便于后续使用
root@master231:~# ll forest/
total 20
drwxr-xr-x 2 root root 4096 May 1 20:52 ./
drwx—— 15 root root 4096 May 1 20:52 ../
-rw-r–r– 1 root root 1119 May 1 20:51 forest.crt
-rw-r–r– 1 root root 915 May 1 20:45 forest.csr
-rw——- 1 root root 1704 May 1 20:43 forest.key
root@master231:~# scp -r forest 10.0.0.232:~
root@master231:~# scp -r forest 10.0.0.233:~
5.1.8 客户端测试
root@worker233:~/forest# kubectl -s https://10.0.0.231:6443 --client-key forest.key --client-certificate forest.crt --insecure-skip-tls-verify get nodes
Error from server (Forbidden): nodes is forbidden: User "forest" cannot list resource "nodes" in API group "" at the cluster scope
6 kubeconfig的组成部分
6.1 概述
kubeconfig是YAML格式的文件,用于存储身份认证信息,以便于客户端加载并认证到API Server。
kubeconfig保存有认证到一至多个Kubernetes集群的相关配置信息,并允许管理员按需在各配置间灵活切换
clusters:
Kubernetes集群访问端点(API Server)列表。
users:
认证到API Server的身份凭据列表。
contexts:
将每一个user同可认证到的cluster建立关联的上下文列表。
current-context:
当前默认使用的context
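一个最小化的kubeconfig结构示例如下(集群名、用户名与server地址均为示意),可对照上面四个组成部分阅读:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64编码的CA证书>
    server: https://10.0.0.231:6443
  name: myk8s
users:
- name: yangsenlin
  user:
    token: <持有者令牌>
contexts:
- context:
    cluster: myk8s
    user: yangsenlin
  name: yangsenlin@myk8s
current-context: yangsenlin@myk8s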
6.2 为静态令牌认证的用户生成kubeconfig
6.2.1 创建一个集群
root@master231:~/forest# kubectl config set-cluster myk8s --embed-certs=true --certificate-authority=/etc/kubernetes/pki/ca.crt --server="https://10.0.0.231:6443" --kubeconfig=./yangsenlin-k8s.conf
Cluster "myk8s" set.
root@master231:~/forest# ll yangsenlin-k8s.conf
-rw------- 1 root root 1663 May 1 21:06 yangsenlin-k8s.conf
6.2.2 查看集群
root@master231:~/forest# kubectl config get-clusters --kubeconfig=./yangsenlin-k8s.conf
NAME
myk8s
root@master231:~/forest#
6.2.3 查看令牌文件
root@master231:~/forest# cat /etc/kubernetes/pki/token.csv
fd6f1d.0b58191d3d726a69,yangsenlin,10001,k8s
79ac1a.52d68499662b2b52,forest,10002,k8s
f7a5b9.132ce8cf5643cf2b,ysl,10003,k3s
root@master231:~/forest#
6.2.4 创建用户信息
root@master231:~/forest# kubectl config set-credentials yangsenlin --token="fd6f1d.0b58191d3d726a69" --kubeconfig=./yangsenlin-k8s.conf
User "yangsenlin" set.
6.2.5 查看用户信息
root@master231:~/forest# kubectl config get-users --kubeconfig=./yangsenlin-k8s.conf
NAME
yangsenlin
root@master231:~/forest#
6.2.6 定义上下文
root@master231:~/forest# kubectl config set-context yangsenlin@myk8s --user=yangsenlin --cluster=myk8s --kubeconfig=./yangsenlin-k8s.conf
Context "yangsenlin@myk8s" created.
root@master231:~/forest#
root@master231:~/forest# kubectl config set-context forest@myk8s --user=forest --cluster=myk8s --kubeconfig=./yangsenlin-k8s.conf
Context "forest@myk8s" created.
6.2.7 查看上下文
root@master231:~/forest# kubectl config get-contexts --kubeconfig=./yangsenlin-k8s.conf
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
forest@myk8s myk8s forest
yangsenlin@myk8s myk8s yangsenlin
root@master231:~/forest#
6.2.8 定义当前使用的上下文
root@master231:~/forest# kubectl config use-context yangsenlin@myk8s --kubeconfig=./yangsenlin-k8s.conf
Switched to context "yangsenlin@myk8s".
root@master231:~/forest#
6.2.9 查看当前使用的上下文
root@master231:~/forest# kubectl config current-context --kubeconfig=./yangsenlin-k8s.conf
yangsenlin@myk8s
root@master231:~/forest#
6.2.10 打印kubeconfig信息,默认会使用"REDACTED"或者"DATA+OMITTED"关键字隐藏证书信息,使用--raw选项就可以打印出证书信息
root@master231:~/forest# kubectl config view --kubeconfig=./yangsenlin-k8s.conf
apiVersion: v1
clusters:
– cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.231:6443
name: myk8s
contexts:
– context:
cluster: myk8s
user: forest
name: forest@myk8s
– context:
cluster: myk8s
user: yangsenlin
name: yangsenlin@myk8s
current-context: yangsenlin@myk8s
kind: Config
preferences: {}
users:
– name: yangsenlin
user:
token: REDACTED
root@master231:~/forest#
root@master231:~/forest# kubectl config view --kubeconfig=./yangsenlin-k8s.conf --raw
apiVersion: v1
clusters:
– cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOREV4TXpBME1Gb1hEVE0xTURRd01qRXhNekEwTUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT1hICkI4YU1nK3VBTFhpc2lyaloyUFBoTzY0bENpbzVvT0pMUU5EZzkzbHd4YTduQVEvNG1zY3Q3K2t3NGxEcEV4eFIKRjllUUJIVmdYVys1ZndFcVJDTXZMa2xkRW0xbEQrazNqVHM2ZHJXWEdqUGViayt1QU9Ha01MN3JlYjdpcTlIawpCdXlvUzg5eW13MkRoKzZjK3VOMjNZbWo2RGo0ZXgzUXJUVkg2a0VFeUFHODJyRlN6QzE0NTRhQjVjZ3RDUnM1CndkWVptaEYvUUxTWFBBMURFREZRVjNnYUxnSFRHNVZWQjFWV3MvVHM5YXp5Q2pIOVlwZCtxdXhTaFU1TmVyV3QKQWZ3bnFNdHVqaEV4MmR0bTlMMDVBcEJ2WlE1TURLRnFmNnBCOGhoRlVBdmtaaE5tZzk4V21hdTB0TnIvNkZENgo5TG1hcSszN3RyYkpwa2ZtMkdrQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZLT00xSkhFSFN0YmNHS0JkR1Y4Kzh5dHJ0anJNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRHJpZXFFWjJGbTNnWkoyc2RMQgpCOEZ0blMyTjdXTDdydUNxSVNSU3V1cm5MV3hXYkhDQUEvN09tU09uZE9oRG1nQ0tkQXhKY3FOQUlTd1B5NW1SCi95clA1anBYN28yWW8reFVEL3N0bGg5d2ZlRDlaWGhzbzgyVDNNVVUyQnRwMFdrb2h3OUZmYlYyakpUd3pEdUYKSlVCNXUrU0xZUG90dXZ0TStkQXRWY2Rsa0M2RjV4Z1gwdzZPOVJHdDVlYmprNnhWUUZoWmpUWGI0ZGY5Nno1UgpBV2VVc0tHcGRqdW54dzZxR21xYW51T1prUjFsZjMwckx0UVVVeWRYNW5ZMVFzZUQ2TG9ZcDM2KytmbXJjK0ZlCnIzVDhYS0J3UVA0azB3aDljQzNvY3diS2JkR1lESUFHZnBGTEI3ZFBGalFtR0cxMFBTY3p0MzZZK3FJbVlEb0wKY21ZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts:
– context:
cluster: myk8s
user: forest
name: forest@myk8s
– context:
cluster: myk8s
user: yangsenlin
name: yangsenlin@myk8s
current-context: yangsenlin@myk8s
kind: Config
preferences: {}
users:
– name: yangsenlin
user:
token: fd6f1d.0b58191d3d726a69
root@master231:~/forest#
6.2.11 将kubeconfig文件拷贝到客户端
root@master231:~/forest# scp yangsenlin-k8s.conf 10.0.0.232:~/forest
root@master231:~/forest# scp yangsenlin-k8s.conf 10.0.0.233:~/forest
6.2.12 客户端进行认证
[root@worker233 ~]# kubectl get pods --kubeconfig=./yangsenlin-k8s.conf
Error from server (Forbidden): pods is forbidden: User "yangsenlin" cannot list resource "pods" in API group "" in the namespace "default"
[root@worker233 ~]#
[root@worker233 ~]# kubectl get pods --kubeconfig=./yangsenlin-k8s.conf --context=forest@myk8s
Error from server (Forbidden): pods is forbidden: User "forest" cannot list resource "pods" in API group "" in the namespace "default"
[root@worker233 ~]#
6.3 为X509数字证书的用户生成kubeconfig
6.3.1 添加证书用户
root@master231:~/forest# kubectl config set-credentials senge --client-certificate=/root/forest/forest.crt --client-key=/root/forest/forest.key --embed-certs=true --kubeconfig=./yangsenlin-k8s.conf
User "senge" set.
root@master231:~/forest#
6.3.2 查看用户列表
root@master231:~/forest# kubectl config get-users --kubeconfig=./yangsenlin-k8s.conf
NAME
senge
yangsenlin
root@master231:~/forest#
6.3.3 配置上下文
root@master231:~/forest# kubectl config set-context senge@myk8s --user=senge --cluster=myk8s --kubeconfig=./yangsenlin-k8s.conf
Context "senge@myk8s" created.
root@master231:~/forest#
6.3.4 查看上下文列表
root@master231:~/forest# kubectl config get-contexts --kubeconfig=./yangsenlin-k8s.conf
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
forest@myk8s myk8s forest
senge@myk8s myk8s senge
* yangsenlin@myk8s myk8s yangsenlin
6.3.5 查看kubeconfig信息
root@master231:~/forest# kubectl config view --kubeconfig=./yangsenlin-k8s.conf
apiVersion: v1
clusters:
– cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.231:6443
name: myk8s
contexts:
– context:
cluster: myk8s
user: forest
name: forest@myk8s
– context:
cluster: myk8s
user: senge
name: senge@myk8s
– context:
cluster: myk8s
user: yangsenlin
name: yangsenlin@myk8s
current-context: yangsenlin@myk8s
kind: Config
preferences: {}
users:
– name: senge
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
– name: yangsenlin
user:
token: REDACTED
root@master231:~/forest#
6.3.6 将kubeconfig拷贝到客户端节点
root@master231:~/forest# scp yangsenlin-k8s.conf 10.0.0.232:~/forest
root@master231:~/forest# scp yangsenlin-k8s.conf 10.0.0.233:~/forest
6.3.7 客户端测试验证
[root@worker233 ~]# kubectl get pods --kubeconfig=./yangsenlin-k8s.conf --context=senge@myk8s
Error from server (Forbidden): pods is forbidden: User "forest" cannot list resource "pods" in API group "" in the namespace "default"
[root@worker233 ~]#
温馨提示: 虽然kubeconfig中该用户名为senge,但API Server实际识别的是客户端证书中的CN(即forest),因此报错信息中显示的用户是forest。
7 k8s默认基于sa进行认证
7.1 为何需要service Account
Kubernetes原生(kubernetes-native)托管运行于Kubernetes之上,通常需要直接与API Server进行交互以获取必要的信息。
API Server同样需要对这类来自于Pod资源中客户端程序进行身份验证,Service Account也就是设计专用于这类场景的账号。
ServiceAccount是API Server支持的标准资源类型之一。
– 1.基于资源对象保存ServiceAccount的数据;
– 2.认证信息保存于ServiceAccount对象专用的Secret中(v1.23-版本)
– 3.隶属名称空间级别,专供集群上的Pod中的进程访问API Server时使用;
7.2 Pod使用ServiceAccount方式
在Pod上使用Service Account通常有两种方式:
自动设定:
Service Account通常由API Server自动创建并通过ServiceAccount准入控制器自动关联到集群中创建的每个Pod上。
自定义:
在Pod规范上,使用serviceAccountName指定要使用的特定ServiceAccount。
Kubernetes基于三个组件完成Pod上serviceaccount的自动化,分别对应: ServiceAccount Admission Controller,Token Controller,ServiceAccount Controller。
– ServiceAccount Admission Controller:
API Server准入控制器插件,主要负责完成Pod上的ServiceAccount的自动化。
为每个名称空间自动生成一个”default”的sa,若用户未指定sa,则默认使用”default”。
– Token Controller:
为每一个sa分配一个token的组件,已经集成到Controller manager的组件中。
– ServiceAccount Controller:
为sa生成对应的数据信息,已经集成到Controller manager的组件中。
温馨提示:
需要用到特殊权限时,可为Pod指定要使用的自定义ServiceAccount资源对象
7.3 ServiceAccount Token的不同实现方式
ServiceAccount使用专用的Secret对象(Kubernetes v1.23-)存储相关的敏感信息
– 1.Secret对象的类型标识为“kubernetes.io/service-account-token”
– 2.该Secret对象会自动附带认证到API Server用到的Token,也称为ServiceAccount Token
ServiceAccount Token的不同实现方式
– 1.Kubernetes v1.20-
系统自动生成专用的Secret对象,并基于secret卷插件关联至相关的Pod;
Secret中会自动附带Token且永久有效(安全性低,一旦该token泄露即可被长期用于认证)。
– 2.Kubernetes v1.21-v1.23:
系统自动生成专用的Secret对象,并通过projected卷插件关联至相关的Pod;
Pod不再使用Secret中的Token,而是由Kubelet向TokenRequest API请求生成,默认有效期为一年,且每小时更新一次;
该Secret上的Token已被弃用,在未来版本中将不再自动创建。
– 3.Kubernetes v1.24+:
系统不再自动生成专用的Secret对象。
而是由Kubelet负责向TokenRequest API请求生成Token,默认有效期为一年,且每小时更新一次;
7.4 创建sa并让pod引用指定的sa
root@master231:~/manifests/pods# cat 26-pods-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: yangsenlin
—
apiVersion: v1
kind: Pod
metadata:
name: ysl-pods-sa
spec:
serviceAccountName: yangsenlin
containers:
– name: c1
image: crpi-kxgdi0lp5jdep1gc.cn-chengdu.personal.cr.aliyuncs.com/yangsenlin/apps:v2
root@master231:~/manifests/pods# kubectl apply -f 26-pods-sa.yaml
serviceaccount/yangsenlin created
pod/ysl-pods-sa created
7.5 验证pod使用sa的验证身份
root@master231:~/manifests/pods# kubectl exec -it ysl-pods-sa sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] — [COMMAND] instead.
/ # ll /var/run/secrets/kubernetes.io/serviceaccount/
sh: ll: not found
/ # ls -l /var/run/secrets/kubernetes.io/serviceaccount/
total 0
lrwxrwxrwx 1 root root 13 May 2 02:37 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 May 2 02:37 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 May 2 02:37 token -> ..data/token
/ #
/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ #
/ # curl -k -H "Authorization: Bearer ${TOKEN}" https://kubernetes
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:serviceaccount:default:yangsenlin\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}/ #
7.6 pod基于projected存储卷引用serviceaccount
Kubernetes v1.21+版本中,Pod加载上面三种数据的方式,改变为基于projected卷插件,通过三个数据源(source)分别进行
serviceAccountToken:
提供由Kubelet负责向TokenRequest API请求生成的Token。
configMap:
经由kube-root-ca.crt这个ConfigMap对象的ca.crt键,引用Kubernetes CA的证书
downwardAPI:
基于fieldRef,获取当前Pod所处的名称空间。
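对应上面三个数据源,kubelet自动注入Pod的projected卷大致形如下面的片段(卷名与expirationSeconds取值为示意):
volumes:
- name: kube-api-access-xxxxx
  projected:
    sources:
    - serviceAccountToken:
        expirationSeconds: 3607
        path: token
    - configMap:
        name: kube-root-ca.crt
        items:
        - key: ca.crt
          path: ca.crt
    - downwardAPI:
        items:
        - fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
          path: namespace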
8 启用authorization模式
8.1 概述
在kube-apiserver上使用"--authorization-mode"选项进行定义,多个模块彼此间以逗号分隔。
kubeadm部署的集群默认启用了Node和RBAC两个鉴权模块。
API Server中的鉴权框架及启用的鉴权模块负责鉴权:
支持的鉴权模块:
Node:
专用的授权模块,它基于kubelet将要运行的Pod向kubelet进行授权。
ABAC:
通过将属性(包括资源属性、用户属性、对象和环境属性等)组合在一起的策略,将访问权限授予用户。
RBAC:
基于企业内个人用户的角色来管理对计算机或网络资源的访问的鉴权方法。
Webhook:
用于支持同Kubernetes外部的授权机制进行集成。
另外两个特殊的鉴权模块是AlwaysDeny和AlwaysAllow。
参考链接:
https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/
9 RBAC基础概念
实体(Entity):
在RBAC也称为Subject,通常指的是User、Group或者是ServiceAccount;
角色(Role):
承载资源操作权限的容器。
资源(Resource):
在RBAC中也称为Object,指代Subject期望操作的目标,例如Service,Deployments,ConfigMap,Secret、Pod等资源。
仅限于"/api/v1/..."及"/apis/<group>/<version>/..."路径下的端点被视作"资源类请求";
其它路径对应的端点均被视作"非资源类请求(Non-Resource Requests)",例如"/api"或"/healthz"等端点;
动作(Actions):
Subject可以于Object上执行的特定操作,具体的可用动作取决于Kubernetes的定义。
资源型对象:
只读操作:get、list、watch等。
读写操作:create、update、patch、delete、deletecollection等。
非资源型端点仅支持”get”操作。
角色绑定(Role Binding):
将角色关联至实体上,它能够将角色具体的操作权限赋予给实体。
角色的类型:
Namespace级别:
称为Role,定义名称空间范围内的资源操作权限集合。
Namespace和Cluster级别:
称为ClusterRole,定义集群范围内的资源操作权限集合,包括集群级别及名称空间级别的资源对象。
角色绑定的类型:
Cluster级别:
称为ClusterRoleBinding,可以将实体(User、Group或ServiceAccount)关联至ClusterRole。
Namespace级别:
称为RoleBinding,可以将实体关联至ClusterRole或Role。
即便将Subject使用RoleBinding关联到了ClusterRole上,该角色赋予到Subject的权限也会降级到RoleBinding所属的Namespace范围之内。
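例如,下面的RoleBinding把内置集群角色view绑定给用户forest,但其权限只在default名称空间内生效(用户名与名称空间均为示例):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: forest-view-default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: forest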
10 ClusterRole
启用RBAC鉴权模块时,API Server会自动创建一组ClusterRole和ClusterRoleBinding对象
多数都以“system:”为前缀,也有几个面向用户的ClusterRole未使用该前缀,如cluster-admin、admin等。
它们都默认使用“kubernetes.io/bootstrapping: rbac-defaults”这一标签。
默认的ClusterRole大体可以分为5个类别。
API发现相关的角色:
包括system:basic-user、system:discovery和system:public-info-viewer。
面向用户的角色:
包括cluster-admin、admin、edit和view。
核心组件专用的角色:
包括system:kube-scheduler、system:volume-scheduler、system:kube-controller-manager、system:node和system:node-proxier等。
其它组件专用的角色:
包括system:kube-dns、system:node-bootstrapper、system:node-problem-detector和system:monitoring等。
内置控制器专用的角色:
专为内置的控制器使用的角色,具体可参考官网文档。
11 K8S内置的面向用户的集群角色
cluster-admin:
允许用户在目标范围内的任意资源上执行任意操作;使用ClusterRoleBinding关联至用户时,授权操作集群及所有名称空间中任何资源;使用RoleBinding关联至用户时,授权控制其所属名称空间中的所有资源,包括Namespace资源自身,隶属于”system:masters 组”。
admin:
管理员权限,主要用于结合RoleBinding为特定名称空间快速授权生成管理员用户,它能够将RoleBinding所属名称空间中的大多数资源的读/写权限授予目标用户,包括创建Role和RoleBinding的能力;但不支持对ResourceQuota及Namespace本身进行操作;
edit:
接近于admin的权限,支持对名称空间内的大多数对象进行读/写操作,包括Secret,但不允许查看或修改Role及RoleBinding;
view:
允许以只读方式访问名称空间中的大多数对象,但不包括Role、RoleBinding和Secret;
12 Role授权给一个用户类型
12.1 为授权前测试
[root@worker233 ~]# kubectl --kubeconfig=./yangsenlin-k8s.conf --context=forest@myk8s get pods
Error from server (Forbidden): pods is forbidden: User "forest" cannot list resource "pods" in API group "" in the namespace "default"
[root@worker233 ~]#
12.2 创建Role
[root@master231 rbac]# kubectl create role reader --resource=pods,services --verb=get,watch,list -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - get
  - watch
  - list
[root@master231 rbac]#
[root@master231 rbac]#
[root@master231 rbac]# kubectl create role reader --resource=pods,services --verb=get,watch,list
role.rbac.authorization.k8s.io/reader created
[root@master231 rbac]#
[root@master231 rbac]#
12.3 创建角色绑定
[root@master231 rbac]# kubectl create rolebinding forest-as-reader --user=forest --role=reader -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: forest-as-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: forest
[root@master231 rbac]#
[root@master231 rbac]# kubectl create rolebinding forest-as-reader --user=forest --role=reader
rolebinding.rbac.authorization.k8s.io/forest-as-reader created
[root@master231 rbac]#
12.4 授权后再次验证
[root@worker233 ~]# kubectl --kubeconfig=./yangsenlin-k8s.conf --context=forest@myk8s get po,svc
NAME READY STATUS RESTARTS AGE
pod/ysl-ysl-xiuxian 1/1 Running 0 175m
pod/ysl-pods-sa 1/1 Running 0 164m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.200.0.1
[root@worker233 ~]#
[root@worker233 ~]# kubectl --kubeconfig=./yangsenlin-k8s.conf --context=forest@myk8s get cm
Error from server (Forbidden): configmaps is forbidden: User "forest" cannot list resource "configmaps" in API group "" in the namespace "default"
[root@worker233 ~]#
13 ClusterRole授权给一个用户组类型
13.1 创建集群角色
root@master231:~/forest# kubectl create clusterrole reader --resource=configmaps,pods,sc,pv,pvc --verb=get,watch,list -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: reader
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - watch
  - list
root@master231:~/forest#
root@master231:~/forest# kubectl create clusterrole reader --resource=configmaps,pods,sc,pv,pvc --verb=get,watch,list
clusterrole.rbac.authorization.k8s.io/reader created
root@master231:~/forest#
13.2 将集群角色绑定给k8s组
root@master231:~/forest# kubectl create clusterrolebinding k8s-as-reader --clusterrole=reader --group=k8s
clusterrolebinding.rbac.authorization.k8s.io/k8s-as-reader created
root@master231:~/forest#
13.3 测试验证
13.3.1 基于kubeconfig测试
基于kubeconfig测试
[root@worker233 ~]# kubectl --kubeconfig=./yangsenlin-k8s.conf get po,svc
NAME READY STATUS RESTARTS AGE
ysl-ysl-xiuxian 1/1 Running 0 3h5m
ysl-pods-sa 1/1 Running 0 175m
Error from server (Forbidden): services is forbidden: User "yangsenlin" cannot list resource "services" in API group "" in the namespace "default"
[root@worker233 ~]#
[root@worker233 ~]#
[root@worker233 ~]# kubectl --kubeconfig=./yangsenlin-k8s.conf get po,cm
NAME READY STATUS RESTARTS AGE
pod/ysl-ysl-xiuxian 1/1 Running 0 3h6m
pod/ysl-pods-sa 1/1 Running 0 175m
NAME DATA AGE
configmap/game-demo 7 9d
configmap/haha 2 8d
configmap/kube-root-ca.crt 1 13d
configmap/nginx-subfile 2 9d
configmap/ysl-ysl 4 9d
configmap/yangsenlin-cm 2 8d
13.3.2 基于token测试
[root@worker233 ~]# kubectl --server=https://10.0.0.231:6443 --token=ca43c1.a0f3fe638b9b80d3 --certificate-authority=/etc/kubernetes/pki/ca.crt get po,cm
NAME READY STATUS RESTARTS AGE
pod/ysl-ysl-xiuxian 1/1 Running 0 3h8m
pod/ysl-pods-sa 1/1 Running 0 177m
NAME DATA AGE
configmap/game-demo 7 9d
configmap/haha 2 8d
configmap/kube-root-ca.crt 1 13d
configmap/nginx-subfile 2 9d
configmap/ysl-ysl 4 9d
configmap/yangsenlin-cm 2 8d
[root@worker233 ~]#
[root@worker233 ~]#
[root@worker233 ~]# kubectl --server=https://10.0.0.231:6443 --token=ca43c1.a0f3fe638b9b80d3 --certificate-authority=/etc/kubernetes/pki/ca.crt get nodes
Error from server (Forbidden): nodes is forbidden: User "forest" cannot list resource "nodes" in API group "" at the cluster scope
[root@worker233 ~]#
14 clusterrole授权给一个serviceaccount类型
14.1 安装依赖包
pip install kubernetes -i https://pypi.tuna.tsinghua.edu.cn/simple/
14.2 编写python脚本
编写 view-k8s-resources.py 脚本,在Pod内通过ServiceAccount访问API Server并列出Deployment与Pod资源。
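脚本内容可参考如下示例:基于kubernetes官方Python客户端,在Pod内加载ServiceAccount凭据后列出default名称空间的Deployment与Pod,输出格式与下文执行结果一致(具体实现为推测的示意,并非原文):
cat > view-k8s-resources.py <<'EOF'
from kubernetes import client, config

# 加载Pod内挂载的ServiceAccount凭据(/var/run/secrets/kubernetes.io/serviceaccount/)
config.load_incluster_config()

apps_v1 = client.AppsV1Api()
core_v1 = client.CoreV1Api()

print("###### Deployment列表 ######")
for deploy in apps_v1.list_namespaced_deployment(namespace="default").items:
    print(deploy.metadata.name)

print("###### Pod列表 ######")
for pod in core_v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name)
EOF
注意: 运行该脚本的Pod所使用的ServiceAccount需要先绑定能够list deployments、pods的(Cluster)Role,否则会返回403。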
ysl-pods-sa 1/1 Running 0 3h18m 10.100.2.247 worker233
xiuxian-6dffdd86b-hstrw 1/1 Running 0 28s 10.100.2.12 worker233
[root@master231 rbac]#
[root@master231 rbac]# kubectl exec -it xiuxian-6dffdd86b-hstrw -- sh
/ # python3 view-k8s-resources.py
###### Deployment列表 ######
xiuxian
###### Pod列表 ######
ysl-ysl-xiuxian
ysl-pods-sa
xiuxian-6dffdd86b-hstrw
/ #
15 kubeconfig加载的优先级
命令行选项"--kubeconfig" > 环境变量"export KUBECONFIG=/root/yangsenlin-k8s.conf" > 默认加载路径"~/.kube/config"
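示例(文件路径沿用上文的yangsenlin-k8s.conf):
# 1.命令行选项优先级最高
kubectl get pods --kubeconfig=/root/yangsenlin-k8s.conf
# 2.其次是KUBECONFIG环境变量
export KUBECONFIG=/root/yangsenlin-k8s.conf
kubectl get pods
# 3.两者都未指定时,默认读取~/.kube/config
unset KUBECONFIG
kubectl get pods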
15 验证“/root/.kube/config”文件默认的集群角色权限
15.1 导入证书
root@master231:~/forest# kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > /opt/admin.crt
root@master231:~/forest# ll /opt/admin.crt
-rw-r--r-- 1 root root 1147 May 2 11:02 /opt/admin.crt
root@master231:~/forest#
15.2 查看证书信息
root@master231:~/forest# openssl x509 -noout -text -in /opt/admin.crt
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4527849109897624662 (0x3ed626c6a0043c56)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Apr 4 11:30:40 2025 GMT
Not After : Apr 4 11:30:41 2026 GMT
Subject: O = system:masters, CN = kubernetes-admin #这里是用户和组
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
….
root@master231:~/forest#
15.3 查看内置集群角色
root@master231:~/forest# kubectl get clusterrolebindings cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: "2025-04-04T11:30:48Z"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
resourceVersion: "148"
uid: 0d1acdfc-f91b-46d6-b8ee-caf8f2d56249
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:masters
root@master231:~/forest#
16 metrics-server实现HPA
部署文档
https://github.com/kubernetes-sigs/metrics-server
[root@master231 add-ons]# cd metrics-server
[root@master231 metrics-server]# ll
total 8
drwxr-xr-x 2 root root 4096 Feb 23 16:36 ./
drwxr-xr-x 8 root root 4096 Feb 23 16:36 ../
[root@master231 metrics-server]#
[root@master231 metrics-server]# wget http://192.168.15.253/Resources/Kubernetes/Add-ons/metrics-server/high-availability-1.21%2B.yaml
[root@master231 metrics-server]# kubectl apply -f high-availability-1.21+.yaml
[root@master231 metrics-server]# kubectl get pods -o wide -n kube-system -l k8s-app=metrics-server
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
metrics-server-dfb9648d6-55st9 1/1 Running 0 66s 10.100.2.19 worker233
metrics-server-dfb9648d6-nzgkx 1/1 Running 0 66s 10.100.1.145 worker232
[root@master231 metrics-server]#
镜像下载地址:
http://192.168.15.253/Resources/Kubernetes/Add-ons/metrics-server/
16.1 验证metrics组件是否正常工作
[root@master231 rbac]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master231 160m 8% 1925Mi 24%
worker232 179m 8% 2938Mi 37%
worker233 176m 8% 2583Mi 33%
[root@master231 rbac]#
[root@master231 rbac]# kubectl top pod
NAME CPU(cores) MEMORY(bytes)
ysl-ysl-xiuxian 0m 3Mi
ysl-pods-sa 0m 4Mi
xiuxian-6dffdd86b-hstrw 0m 13Mi
[root@master231 rbac]#
[root@master231 rbac]# kubectl top pod -n kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-6d8c4cb4d-2z2j8 1m 18Mi
coredns-6d8c4cb4d-jhnpg 1m 35Mi
csi-nfs-controller-5c5c695fb-6psv8 0m 34Mi
csi-nfs-node-bsmr7 1m 54Mi
csi-nfs-node-ghtvt 0m 35Mi
csi-nfs-node-s4dm5 2m 51Mi
etcd-master231 14m 90Mi
kube-apiserver-master231 83m 564Mi
kube-controller-manager-master231 9m 81Mi
kube-proxy-b855f 3m 29Mi
kube-proxy-qxhhp 3m 20Mi
kube-proxy-xlmtw 6m 36Mi
kube-scheduler-master231 1m 20Mi
[root@master231 rbac]#
16.2 验证hpa
[root@master231 horizontalpodautoscalers]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
xiuxian 1/1 1 1 56m
[root@master231 horizontalpodautoscalers]#
[root@master231 horizontalpodautoscalers]# kubectl autoscale deploy xiuxian --min=2 --max=5 --cpu-percent=95 -o yaml --dry-run=client
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
creationTimestamp: null
name: xiuxian
spec:
maxReplicas: 5
minReplicas: 2
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: xiuxian
targetCPUUtilizationPercentage: 95
status:
currentReplicas: 0
desiredReplicas: 0
[root@master231 horizontalpodautoscalers]#
[root@master231 horizontalpodautoscalers]# kubectl autoscale deploy xiuxian --min=2 --max=5 --cpu-percent=95 -o yaml --dry-run=client >> 01-deploy-hpa.yaml
16.3 验证hpa
[root@master231 horizontalpodautoscalers]#
[root@master231 horizontalpodautoscalers]# cat 01-deploy-hpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: stress
spec:
replicas: 1
selector:
matchLabels:
app: stress
template:
metadata:
labels:
app: stress
spec:
containers:
- image: forest2020/ysl-linux-tools:v0.1
name: ysl-linux-tools
args:
- tail
- -f
- /etc/hosts
resources:
requests:
cpu: 0.2
memory: 300Mi
limits:
cpu: 0.5
memory: 500Mi
—
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: stress-hpa
spec:
maxReplicas: 5
minReplicas: 2
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: stress
targetCPUUtilizationPercentage: 95
[root@master231 horizontalpodautoscalers]#
[root@master231 horizontalpodautoscalers]#
[root@master231 horizontalpodautoscalers]# kubectl apply -f 01-deploy-hpa.yaml
deployment.apps/stress created
horizontalpodautoscaler.autoscaling/stress-hpa created
[root@master231 horizontalpodautoscalers]#
[root@master231 horizontalpodautoscalers]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
stress-5585b5ccc-6twg5 1/1 Running 0 22s 10.100.2.17 worker233
stress-5585b5ccc-7nbhc 1/1 Running 0 37s 10.100.1.144 worker232
[root@master231 horizontalpodautoscalers]#
16.4 验证hpa
[root@master231 metrics-server]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 125%/95% 2 5 4 7m56s
[root@master231 metrics-server]#
[root@master231 metrics-server]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
stress-5585b5ccc-6twg5 1/1 Running 0 9m55s 10.100.2.17 worker233
stress-5585b5ccc-b5qxr 1/1 Running 0 4m55s 10.100.2.18 worker233
stress-5585b5ccc-ccc2v 1/1 Running 0 4m55s 10.100.1.147 worker232
stress-5585b5ccc-czhg8 1/1 Running 0 4m6s 10.100.1.146 worker232
[root@master231 metrics-server]#
[root@master231 metrics-server]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 63%/95% 2 5 4 11m
[root@master231 metrics-server]#
16.5 再次压测
[root@master231 ~]# kubectl exec stress-5585b5ccc-6twg5 -- stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10m
stress: info: [44] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
[root@master231 ~]# kubectl exec stress-5585b5ccc-b5qxr -- stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10m
stress: info: [7] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
[root@master231 ~]# kubectl exec stress-5585b5ccc-ccc2v -- stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10m
stress: info: [7] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
[root@master231 ~]# kubectl exec stress-5585b5ccc-czhg8 -- stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10m
stress: info: [7] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
16.6 发现最多有5个Pod创建
[root@master231 metrics-server]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 163%/95% 2 5 5 12m
[root@master231 metrics-server]#
[root@master231 metrics-server]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
stress-5585b5ccc-6twg5 1/1 Running 0 12m 10.100.2.17 worker233
stress-5585b5ccc-b5qxr 1/1 Running 0 7m48s 10.100.2.18 worker233
stress-5585b5ccc-ccc2v 1/1 Running 0 7m48s 10.100.1.147 worker232
stress-5585b5ccc-czhg8 1/1 Running 0 6m59s 10.100.1.146 worker232
stress-5585b5ccc-z4l95 1/1 Running 0 32s 10.100.1.148 worker232
[root@master231 metrics-server]#
[root@master231 metrics-server]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 174%/95% 2 5 5 13m
[root@master231 metrics-server]#
[root@master231 metrics-server]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 200%/95% 2 5 5 13m
[root@master231 metrics-server]#
16.7 取消压测后
需要等待5min会自动缩容Pod数量到2个。
17 hpa和vpa的区别?
– hpa:
水平扩缩容:当现有Pod数量不足以承载流量时,自动增加Pod副本数量,以分摊负载。
– vpa:
垂直扩缩容:动态调整容器的资源上限,比如一个Pod一开始是200Mi内存,当资源使用达到定义的阈值时,可以扩大其内存配额,但不会增加Pod副本数量。
典型的区别在于vpa受单节点资源上限制约:Pod是K8S调度的最小单元,不可拆分到多个节点,因此vpa纵向扩容的上限取决于单个节点的可用资源;而hpa可以把副本分散到多个节点。
51 对接ceph
1基于Rook方式快速部署ceph集群
其他部署方式,推荐阅读:
https://rook.io/docs/rook/v1.13/Getting-Started/quickstart/#deploy-the-rook-operator
1.1 Rook概述
Rook是一个开源的云原生存储编排器,为Ceph存储提供平台、框架和支持,以便与云原生环境进行原生集成。
Ceph是一个分布式存储系统,提供文件、块和对象存储,部署在大规模生产集群中。
Rook自动化了Ceph的部署和管理,以提供自我管理、自我扩展和自我修复的存储服务。Rook Operator通过构建Kubernetes资源来部署、配置、扩展、升级和监控Ceph,以实现这一点。
Ceph Operator自2018年12月的Rook v0.9版本起被宣布为稳定,至今已作为生产级存储平台运行多年。Rook由云原生计算基金会(CNCF)托管,是一个毕业级项目。
Rook是用Golang实现的,ceph是用C++实现的,其中数据路径经过高度优化。
简而言之,Rook是一个自管理的分布式存储编排系统,可以为kubernetes提供便利的存储解决方案,Rook本身并不提供存储,而是kubernetes和存储之间提供适配层,简化存储系统的部署和维护工作。目前主要支持存储系统包括但不限于Ceph,Cassandra,NFS等。
从本质上来讲,Rook是一个可以提供ceph集群管理能力的Operator,Rook使用CRD一个控制器来对Ceph之类的资源进行部署和管理。
官网链接:
https://rook.io/
github地址:
https://github.com/rook/rook
1.2 Rook和K8S版本对应关系
我的K8S 1.23.17最高能使用的Rook版本为v1.13。
参考链接:
https://rook.io/docs/rook/v1.13/Getting-Started/Prerequisites/prerequisites/
1.3 环境准备
每个K8S节点增加3块块设备,容量分别为300GB、500GB、1024GB,并重启操作系统。
root@master231:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
…
sdb 8:16 0 300G 0 disk
sdc 8:32 0 500G 0 disk
sdd 8:48 0 1T 0 disk
root@worker232:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
…
sdb 8:16 0 300G 0 disk
sdc 8:32 0 500G 0 disk
sdd 8:48 0 1T 0 disk
root@worker233:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
…
sdb 8:16 0 300G 0 disk
sdc 8:32 0 500G 0 disk
sdd 8:48 0 1T 0 disk
1.4 下载指定版本Rook
root@master231:~# wget https://github.com/rook/rook/archive/refs/tags/v1.13.10.tar.gz
1.5 解压软件包
root@master231:~/manifests/preject/rook# tar xf v1.13.10.tar.gz
1.6 取消master污点
root@master231:~/manifests/preject/rook# kubectl describe nodes | grep -i taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints:
Taints:
root@master231:~/manifests/preject/rook# kubectl taint node master231 node-role.kubernetes.io/master:NoSchedule-
node/master231 untainted
root@master231:~/manifests/preject/rook# kubectl describe nodes | grep -i taints
Taints:
Taints:
Taints:
root@master231:~/manifests/preject/rook#
1.7 创建Rook
root@master231:~/manifests/preject/rook# cd rook-1.13.10/deploy/examples/
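按照Rook官方quickstart,在部署cluster.yaml之前通常需要先应用crds.yaml、common.yaml与operator.yaml以创建CRD和Operator;如环境中尚未执行,可参考如下命令(补充示例,输出略):
kubectl apply -f crds.yaml -f common.yaml -f operator.yaml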
1.8 部署Ceph
[root@master231 examples]# kubectl apply -f cluster.yaml
1.9 部署Rook Ceph工具
[root@master231 examples]# kubectl apply -f toolbox.yaml
1.10 部署CephUI
[root@master231 examples]# kubectl apply -f dashboard-external-https.yaml
1.11 查看Pod列表
[root@master231 examples]# kubectl get pods,svc -n rook-ceph
1.12 查看ceph dashboard的登录密码
kubectl -n rook-ceph get secrets rook-ceph-dashboard-password -o jsonpath='{.data.password}' | base64 -d ;echo
1.13 访问Ceph的WebUI
[root@master231 examples]# kubectl -n rook-ceph get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr-dashboard-external-https NodePort 10.200.3.34
rook-ceph-mon-a ClusterIP 10.200.104.17
rook-ceph-mon-b ClusterIP 10.200.12.187
rook-ceph-mon-c ClusterIP 10.200.33.54
[root@master231 examples]#
https://10.0.0.233:24622/
用户名为: admin
密码: Hx46\Il]{L30|n#^_`c3
1.14 查看OSD列表信息
https://10.0.0.233:24622/#/osd
2 k8s对接ceph
2.1 ceph集群创建存储池
[root@master231 examples]# kubectl get pods -n rook-ceph -l app=rook-ceph-tools -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rook-ceph-tools-5846d4dc6c-lrgm8 1/1 Running 0 3h23m 10.100.1.175 worker232
[root@master231 examples]#
[root@master231 examples]#
[root@master231 examples]# kubectl -n rook-ceph exec -it rook-ceph-tools-5846d4dc6c-lrgm8 -- bash
bash-4.4$
bash-4.4$
bash-4.4$ ceph osd pool create ysl
pool ‘ysl’ created
bash-4.4$
bash-4.4$ ceph osd pool application enable ysl rbd
enabled application ‘rbd’ on pool ‘ysl’
bash-4.4$
2.在指定的存储池创建块设备
bash-4.4$ rbd create -s 2048 ysl -p ysl
bash-4.4$
bash-4.4$ rbd ls -p ysl
ysl
bash-4.4$
bash-4.4$ rbd info ysl/ysl
rbd image 'ysl':
size 2 GiB in 512 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 315452708a887
block_name_prefix: rbd_data.315452708a887
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Mon Feb 24 06:42:32 2025
access_timestamp: Mon Feb 24 06:42:32 2025
modify_timestamp: Mon Feb 24 06:42:32 2025
bash-4.4$
3.所有节点安装ceph的客户端模块
[root@master231 ~]# apt -y install ceph-common
[root@worker232 ~]# apt -y install ceph-common
[root@worker233 ~]# apt -y install ceph-common
4.将客户端证书文件拷贝到K8S集群节点
[root@master231 volumes]# kubectl get pods -n rook-ceph -l app=rook-ceph-tools
NAME READY STATUS RESTARTS AGE
rook-ceph-tools-5846d4dc6c-lrgm8 1/1 Running 0 3h47m
[root@master231 volumes]#
[root@master231 volumes]#
[root@master231 volumes]# kubectl -n rook-ceph cp rook-ceph-tools-5846d4dc6c-lrgm8:/etc/ceph/keyring /etc/ceph/keyring
tar: Removing leading `/’ from member names
[root@master231 volumes]#
[root@master231 volumes]#
[root@master231 volumes]# ll /etc/ceph/keyring
-rw-r--r-- 1 root root 62 Feb 24 15:03 /etc/ceph/keyring
[root@master231 volumes]#
[root@master231 volumes]#
[root@master231 volumes]# scp /etc/ceph/keyring 10.0.0.232:/etc/ceph/
root@10.0.0.232’s password:
keyring 100% 62 168.0KB/s 00:00
[root@master231 volumes]#
[root@master231 volumes]#
[root@master231 volumes]# scp /etc/ceph/keyring 10.0.0.233:/etc/ceph/
root@10.0.0.233’s password:
keyring 100% 62 130.3KB/s 00:00
[root@master231 volumes]#
5.编写资源清单k8s对接块设备存储卷
[root@master231 volumes]# cat 01-deploy-volumes-rbd.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xiuxian-volume-rbd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xixi
  template:
    metadata:
      labels:
        app: xixi
    spec:
      volumes:
      - name: data
        rbd:
          # 指定ceph集群地址,貌似不支持svc的名称解析,暂时写为svc的地址
          monitors:
          #- rook-ceph-mon-a.rook-ceph.svc.ysl.com:6789
          #- rook-ceph-mon-b.rook-ceph.svc.ysl.com:6789
          - 10.200.104.17:6789
          - 10.200.12.187:6789
          # 指定存储池
          pool: ysl
          # 指定块设备的名称
          image: ysl
          # 指定文件系统,支持"ext4", "xfs", "ntfs"类型,若不指定,则默认值为ext4.
          fsType: xfs
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
        name: c1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      initContainers:
      - image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
        name: init
        volumeMounts:
        - name: data
          mountPath: /ysl
        command:
        - /bin/sh
        - -c
        - echo www.ysl.com > /ysl/index.html
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-volume-rbd-54d875d5fc-smfkt 1/1 Running 0 2m23s 10.100.0.74 master231
[root@master231 volumes]#
[root@master231 volumes]# curl 10.100.0.74
www.ysl.com
[root@master231 volumes]#
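如果想进一步确认块设备确实映射并挂载到了Pod所在节点(本例为master231),可以在该节点上做如下检查,仅为排查思路示意:
lsblk | grep rbd        # 查看内核映射出来的rbd设备,例如rbd0
mount | grep rbd        # 确认该设备以xfs格式挂载到了kubelet的Pod数据目录下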
– k8s基于secret存储ceph的认证信息
1.k8s所有worker移除认证信息
[root@master231 ~]# rm -f /etc/ceph/keyring
[root@worker232 ~]# rm -f /etc/ceph/keyring
[root@worker233 ~]# rm -f /etc/ceph/keyring
2.删除Pod之后将无法认证
[root@master231 volumes]# kubectl delete pods xiuxian-volume-rbd-54d875d5fc-smfkt
pod “xiuxian-volume-rbd-54d875d5fc-smfkt” deleted
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-volume-rbd-54d875d5fc-nj46n 0/1 Init:0/1 0 4s
[root@master231 volumes]#
[root@master231 volumes]# kubectl describe pod xiuxian-volume-rbd-54d875d5fc-nj46n
Name: xiuxian-volume-rbd-54d875d5fc-nj46n
Namespace: default
…
Events:
Type Reason Age From Message
—- —— —- —- ——-
Normal Scheduled 13s default-scheduler Successfully assigned default/xiuxian-volume-rbd-54d875d5fc-nj46n to worker232
Normal SuccessfulAttachVolume 13s attachdetach-controller AttachVolume.Attach succeeded for volume “data”
Warning FailedMount 7s kubelet MountVolume.WaitForAttach failed for volume “data” : fail to check rbd image status with: (exit status 95), rbd output: (did not load config file, using default settings.
2025-02-24T15:16:11.391+0800 7f1a4d6534c0 -1 Errors while parsing config file!
2025-02-24T15:16:11.391+0800 7f1a4d6534c0 -1 can’t open ceph.conf: (2) No such file or directory
2025-02-24T15:16:11.391+0800 7f1a4d6534c0 -1 Errors while parsing config file!
2025-02-24T15:16:11.391+0800 7f1a4d6534c0 -1 can’t open ceph.conf: (2) No such file or directory
2025-02-24T15:16:11.391+0800 7f1a4d6534c0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2025-02-24T15:16:11.391+0800 7f1a4d6534c0 -1 AuthRegistry(0x55813c13f848) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
2025-02-24T15:16:11.395+0800 7f1a4d6534c0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2025-02-24T15:16:11.395+0800 7f1a4d6534c0 -1 AuthRegistry(0x7ffcee347800) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
2025-02-24T15:16:11.403+0800 7f1a4d6534c0 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
rbd: couldn’t connect to the cluster!
)
3.将认证信息基于secret存储
[root@master231 volumes]# cat 02-deploy-volumes-rbd-secretRef.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin
type: "kubernetes.io/rbd"
stringData:
  # 指定登录的认证信息,换成你ceph集群的key即可。
  key: AQBL5btnoGXFDxAA2ojiKwgfI4dI42MCV70qnQ==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xiuxian-volume-rbd-secretref
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xixi
  template:
    metadata:
      labels:
        app: xixi
    spec:
      volumes:
      - name: data
        rbd:
          # 指定ceph集群地址,貌似不支持svc的名称解析,暂时写为svc的地址
          monitors:
          #- rook-ceph-mon-a.rook-ceph.svc.ysl.com:6789
          #- rook-ceph-mon-b.rook-ceph.svc.ysl.com:6789
          - 10.200.104.17:6789
          - 10.200.12.187:6789
          # 指定存储池
          pool: ysl
          # 指定块设备的名称
          image: ysl
          # 指定文件系统,支持"ext4", "xfs", "ntfs"类型,若不指定,则默认值为ext4.
          fsType: xfs
          # 指定连接集群的用户名,默认为admin
          user: admin
          # 一旦定义了secretRef选项,将覆盖keyring的配置,表示不再读取worker节点的认证信息。
          secretRef:
            name: ceph-admin
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
        name: c1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
[root@master231 volumes]#
[root@master231 volumes]# kubectl apply -f 02-deploy-volumes-rbd-secretRef.yaml
secret/ceph-admin created
deployment.apps/xiuxian-volume-rbd-secretref created
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-volume-rbd-secretref-64bd99b4b7-s6dlz 1/1 Running 0 17s 10.100.0.75 master231
[root@master231 volumes]#
[root@master231 volumes]#
[root@master231 volumes]# curl 10.100.0.75
www.ysl.com
[root@master231 volumes]#
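上面Secret里的key是手工写死的,实际使用时也可以直接从toolbox里取出admin用户的key再生成Secret,避免复制粘贴出错。下面仅为思路示意(假设toolbox的Deployment名称为rook-ceph-tools,与上文一致):
KEY=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph auth get-key client.admin)
kubectl create secret generic ceph-admin --type="kubernetes.io/rbd" --from-literal=key="${KEY}"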
– ceph的Dashboard实现块设备管理
略
– 操作etcd解决k8s资源一直处于Terminating的方案
1.问题复现:"rook-ceph"名称空间一直处于"Terminating"
[root@master231 /server/kubernetes/ceph/rook-1.13.10/deploy/examples]# kubectl get ns
NAME STATUS AGE
default Active 14d
elastic-system Active 6d4h
ingress-nginx Active 4d
istio-system Active 3d6h
kube-flannel Active 14d
kube-node-lease Active 14d
kube-public Active 14d
kube-system Active 14d
kubesphere-controls-system Active 7d17h
kubesphere-system Active 7d17h
metallb-system Active 4d23h
monitoring Active 4d23h
rook-ceph Terminating 4h15m
yangsenlin Active 3d5h
[root@master231 /server/kubernetes/ceph/rook-1.13.10/deploy/examples]#
2.安装etcd的客户端
apt -y install etcd-client
3.查看数据
export ETCDCTL_API=3
etcdctl --endpoints="10.0.0.231:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key get /registry/namespaces/rook-ceph --prefix --keys-only
4.删除数据
[root@master231 ~]# etcdctl --endpoints="10.0.0.231:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key del /registry/namespaces/rook-ceph --prefix
1
[root@master231 ~]#
5.再次验证
[root@master231 /server/kubernetes/ceph/rook-1.13.10/deploy/examples]# kubectl get ns
NAME STATUS AGE
default Active 14d
elastic-system Active 6d4h
ingress-nginx Active 4d
istio-system Active 3d6h
kube-flannel Active 14d
kube-node-lease Active 14d
kube-public Active 14d
kube-system Active 14d
kubesphere-controls-system Active 7d17h
kubesphere-system Active 7d17h
metallb-system Active 4d23h
monitoring Active 4d23h
yangsenlin Active 3d5h
[root@master231 /server/kubernetes/ceph/rook-1.13.10/deploy/examples]#
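除了直接操作etcd,另一种更常见、风险更小的思路是把卡住的名称空间的finalizers清空,让apiserver自行完成删除。下面仅为思路示意(假设本机已安装jq):
kubectl get ns rook-ceph -o json \
  | jq '.spec.finalizers=[]' \
  | kubectl replace --raw /api/v1/namespaces/rook-ceph/finalize -f -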
– k8s对接cephFS文件系统
参考链接:
https://rook.io/docs/rook/latest/Getting-Started/example-configurations/#shared-filesystem
1.创建元数据存储池和数据存储池
bash-4.4$ ceph osd pool create ysl-cephfs-metadata
pool 'ysl-cephfs-metadata' created
bash-4.4$
bash-4.4$ ceph osd pool create ysl-cephfs-data
pool 'ysl-cephfs-data' created
bash-4.4$
2.查看ceph集群现有的文件系统
bash-4.4$ ceph fs ls
No filesystems enabled
bash-4.4$
3.创建cephFS实例
bash-4.4$ ceph fs new ysl-ysl-cephfs ysl-cephfs-metadata ysl-cephfs-data
Pool 'ysl-cephfs-data' (id '5') has pg autoscale mode 'on' but is not marked as bulk.
Consider setting the flag by running
# ceph osd pool set ysl-cephfs-data bulk true
new fs with metadata pool 4 and data pool 5
bash-4.4$
4.再次查看ceph集群现有的文件系统
bash-4.4$ ceph fs ls
name: ysl-ysl-cephfs, metadata pool: ysl-cephfs-metadata, data pools: [ysl-cephfs-data ]
bash-4.4$
5.基于Rook的资源清单再创建一个文件系统(filesystem-ec.yaml,Rook会自动拉起对应的mds组件)
[root@master231 examples]# pwd
/root/cloud-computing-stack/ysl/kubernetes/projects/03-rook/rook-1.13.10/deploy/examples
[root@master231 examples]#
[root@master231 examples]# kubectl apply -f filesystem-ec.yaml
cephfilesystem.ceph.rook.io/myfs-ec created
cephfilesystemsubvolumegroup.ceph.rook.io/myfs-csi created
[root@master231 examples]#
[root@master231 ~]# kubectl get pods -o wide -n rook-ceph # 观察是否有mds相关组件。
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-cephfsplugin-d8rxl 2/2 Running 0 5h23m 10.0.0.232 worker232
csi-cephfsplugin-ffms9 2/2 Running 1 (5h23m ago) 5h23m 10.0.0.233 worker233
csi-cephfsplugin-g4492 2/2 Running 0 4h53m 10.0.0.231 master231
csi-cephfsplugin-provisioner-675f8d446d-2tvj5 5/5 Running 1 (5h22m ago) 5h23m 10.100.1.172 worker232
csi-cephfsplugin-provisioner-675f8d446d-tqzjr 5/5 Running 1 (5h22m ago) 5h23m 10.100.2.35 worker233
csi-rbdplugin-9w498 2/2 Running 0 5h23m 10.0.0.232 worker232
csi-rbdplugin-fkktw 2/2 Running 0 4h53m 10.0.0.231 master231
csi-rbdplugin-j5kfd 2/2 Running 1 (5h22m ago) 5h23m 10.0.0.233 worker233
csi-rbdplugin-provisioner-dfc566599-hdpm9 5/5 Running 3 (5h22m ago) 5h23m 10.100.1.171 worker232
csi-rbdplugin-provisioner-dfc566599-snwfj 5/5 Running 1 (5h22m ago) 5h23m 10.100.2.36 worker233
rook-ceph-crashcollector-master231-84d789dc55-rr5lc 1/1 Running 0 3m36s 10.100.0.79 master231
rook-ceph-crashcollector-worker232-5b88cb5bc8-mmkxs 1/1 Running 0 3m34s 10.100.1.193 worker232
rook-ceph-crashcollector-worker233-5bf9645587-nkp54 1/1 Running 0 5h7m 10.100.2.46 worker233
rook-ceph-exporter-master231-67fff8cddb-8vtgb 1/1 Running 0 3m32s 10.100.0.80 master231
rook-ceph-exporter-worker232-5d8f7c4b55-tc4l4 1/1 Running 0 3m31s 10.100.1.194 worker232
rook-ceph-exporter-worker233-8654b9d9c4-slhdk 1/1 Running 0 5h7m 10.100.2.48 worker233
rook-ceph-mds-myfs-ec-a-5869f5c795-w5rbh 2/2 Running 0 3m36s 10.100.0.78 master231
rook-ceph-mds-myfs-ec-b-c4d98d477-knm8z 2/2 Running 0 3m35s 10.100.1.192 worker232
rook-ceph-mgr-a-75b8597954-w6825 3/3 Running 0 5h10m 10.100.1.182 worker232
rook-ceph-mgr-b-59d4b976c7-gk2bb 3/3 Running 0 5h10m 10.100.2.41 worker233
rook-ceph-mon-a-74679c68bb-pmq69 2/2 Running 0 5h15m 10.100.1.180 worker232
rook-ceph-mon-b-7c44758ffd-jxsjx 2/2 Running 0 5h15m 10.100.2.40 worker233
rook-ceph-mon-d-67f489d656-qzspd 2/2 Running 0 5h 10.100.0.73 master231
rook-ceph-operator-5f54cbd997-hq2kb 1/1 Running 0 5h28m 10.100.1.168 worker232
rook-ceph-osd-0-5fdf78969c-x6l8v 2/2 Running 0 5h7m 10.100.1.187 worker232
rook-ceph-osd-1-6cb84f5c66-ldfsd 2/2 Running 0 5h7m 10.100.2.45 worker233
rook-ceph-osd-2-7466b8f64d-4ctf4 2/2 Running 0 5h7m 10.100.2.47 worker233
rook-ceph-osd-3-76dcff98b4-z6dc5 2/2 Running 0 5h7m 10.100.1.186 worker232
rook-ceph-osd-prepare-worker232-9v5zx 0/1 Completed 0 5h7m 10.100.1.191 worker232
rook-ceph-osd-prepare-worker233-gmgg2 0/1 Completed 0 5h7m 10.100.2.49 worker233
rook-ceph-tools-5846d4dc6c-lrgm8 1/1 Running 0 5h28m 10.100.1.175 worker232
[root@master231 ~]#
6.查看cephFS的状态信息
bash-4.4$ ceph fs status ysl-ysl-cephfs
ysl-ysl-cephfs – 0 clients
========================
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active myfs-ec-a Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
ysl-cephfs-metadata metadata 64.0k 759G
ysl-cephfs-data data 0 506G
MDS version: ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)
bash-4.4$
7.测试验证
[root@master231 volumes]# cat 03-deploy-volumes-cephfs-secretRef.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin
type: "kubernetes.io/rbd"
stringData:
  # 指定登录的认证信息,换成你ceph集群的key即可。
  key: AQBL5btnoGXFDxAA2ojiKwgfI4dI42MCV70qnQ==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xiuxian-volume-cephfs-secretref
spec:
  replicas: 3
  selector:
    matchLabels:
      app: xixi
  template:
    metadata:
      labels:
        app: xixi
    spec:
      volumes:
      - name: data
        # 指定存储卷的类型
        cephfs:
          # 指定共享的文件路径
          path: /
          monitors:
          - 10.200.104.17:6789
          - 10.200.12.187:6789
          user: admin
          secretRef:
            name: ceph-admin
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
        name: c1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-volume-cephfs-secretref-bb4cd455-9wmrc 1/1 Running 0 7s 10.100.0.81 master231
xiuxian-volume-rbd-secretref-5894f6b98c-m79ph 1/1 Running 0 84m 10.100.0.76 master231
[root@master231 volumes]#
[root@master231 volumes]# curl 10.100.0.81
403 Forbidden
[root@master231 volumes]#
8.修改数据
[root@master231 volumes]# kubectl exec -it xiuxian-volume-cephfs-secretref-bb4cd455-9wmrc — sh
/ # echo https://www.ysl.com > /usr/share/nginx/html/index.html
/ #
[root@master231 volumes]#
[root@master231 volumes]# curl 10.100.0.81
https://www.ysl.com
[root@master231 volumes]#
9.再次测试验证
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-volume-cephfs-secretref-bb4cd455-9fjg2 1/1 Running 0 4s 10.100.1.195 worker232
xiuxian-volume-cephfs-secretref-bb4cd455-9wmrc 1/1 Running 0 77s 10.100.0.81 master231
xiuxian-volume-cephfs-secretref-bb4cd455-cshzm 1/1 Running 0 4s 10.100.2.50 worker233
xiuxian-volume-rbd-secretref-5894f6b98c-m79ph 1/1 Running 0 85m 10.100.0.76 master231
[root@master231 volumes]#
[root@master231 volumes]#
[root@master231 volumes]# curl 10.100.1.195
https://www.ysl.com
[root@master231 volumes]#
[root@master231 volumes]# curl 10.100.2.50
https://www.ysl.com
[root@master231 volumes]#
[root@master231 volumes]# curl 10.100.0.81
https://www.ysl.com
[root@master231 volumes]#
– K8S的csi对接ceph的rbd动态存储类
1.安装rbd的sc
[root@master231 examples]# cd csi/rbd/
[root@master231 rbd]#
[root@master231 rbd]# ll
total 52
drwxrwxr-x 2 root root 4096 Jul 4 2024 ./
drwxrwxr-x 5 root root 4096 Jul 4 2024 ../
-rw-rw-r-- 1 root root 489 Jul 4 2024 pod-ephemeral.yaml
-rw-rw-r-- 1 root root 315 Jul 4 2024 pod.yaml
-rw-rw-r-- 1 root root 266 Jul 4 2024 pvc-clone.yaml
-rw-rw-r-- 1 root root 308 Jul 4 2024 pvc-restore.yaml
-rw-rw-r-- 1 root root 196 Jul 4 2024 pvc.yaml
-rw-rw-r-- 1 root root 578 Jul 4 2024 snapshotclass.yaml
-rw-rw-r-- 1 root root 205 Jul 4 2024 snapshot.yaml
-rw-rw-r-- 1 root root 3984 Jul 4 2024 storageclass-ec.yaml
-rw-rw-r-- 1 root root 2441 Jul 4 2024 storageclass-test.yaml
-rw-rw-r-- 1 root root 4278 Jul 4 2024 storageclass.yaml
[root@master231 rbd]#
[root@master231 rbd]# kubectl apply -f storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
[root@master231 rbd]#
[root@master231 rbd]# kubectl get sc rook-ceph-block
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
rook-ceph-block rook-ceph.rbd.csi.ceph.com Delete Immediate true 26s
[root@master231 rbd]#
2.创建pvc
[root@master231 rbd]# cat pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
[root@master231 rbd]#
[root@master231 rbd]#
[root@master231 rbd]#
[root@master231 rbd]# kubectl apply -f pvc.yaml
persistentvolumeclaim/rbd-pvc created
[root@master231 rbd]#
[root@master231 rbd]# kubectl get pvc rbd-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rbd-pvc Bound pvc-1f6bc1b8-1bef-4df7-a101-0ca6a2770fdb 1Gi RWO rook-ceph-block 21s
[root@master231 rbd]#
[root@master231 rbd]#
[root@master231 rbd]# kubectl describe pv pvc-1f6bc1b8-1bef-4df7-a101-0ca6a2770fdb
Name: pvc-1f6bc1b8-1bef-4df7-a101-0ca6a2770fdb
Labels:
Annotations: pv.kubernetes.io/provisioned-by: rook-ceph.rbd.csi.ceph.com
volume.kubernetes.io/provisioner-deletion-secret-name: rook-csi-rbd-provisioner
volume.kubernetes.io/provisioner-deletion-secret-namespace: rook-ceph
Finalizers: [external-provisioner.volume.kubernetes.io/finalizer kubernetes.io/pv-protection]
StorageClass: rook-ceph-block
Status: Bound
Claim: default/rbd-pvc
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: rook-ceph.rbd.csi.ceph.com
FSType: ext4
VolumeHandle: 0001-0009-rook-ceph-000000000000000b-6a5f560e-98a7-4808-83dc-bcd01b0f6dda
ReadOnly: false
VolumeAttributes: clusterID=rook-ceph
imageFeatures=layering
imageFormat=2
imageName=csi-vol-6a5f560e-98a7-4808-83dc-bcd01b0f6dda
journalPool=replicapool
pool=replicapool
storage.kubernetes.io/csiProvisionerIdentity=1740367265681-724-rook-ceph.rbd.csi.ceph.com
Events:
[root@master231 rbd]#
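也可以回到toolbox里确认存储类确实在replicapool存储池中创建了对应的rbd镜像(仅为验证思路,镜像名以上面PV属性中的imageName和实际输出为准):
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd ls -p replicapool
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd info replicapool/csi-vol-6a5f560e-98a7-4808-83dc-bcd01b0f6dda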
3.测试验证
[root@master231 volumes]# cat 04-deploy-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xiuxian-ceph-pvc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xixi
  template:
    metadata:
      labels:
        app: xixi
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: rbd-pvc
          readOnly: false
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
        name: c1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
[root@master231 volumes]#
[root@master231 volumes]# kubectl apply -f 04-deploy-pvc.yaml
deployment.apps/xiuxian-ceph-pvc created
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-ceph-pvc-6fc5ff4ff6-wxrhh 1/1 Running 0 6s 10.100.0.83 master231
[root@master231 volumes]#
– K8S的csi对接ceph的cephfs动态存储类
1.创建cephFS的sc
[root@master231 examples]# cd csi/cephfs/
[root@master231 cephfs]#
[root@master231 cephfs]# ll
total 48
drwxrwxr-x 2 root root 4096 Jul 4 2024 ./
drwxrwxr-x 5 root root 4096 Jul 4 2024 ../
-rw-rw-r-- 1 root root 1681 Jul 4 2024 kube-registry.yaml
-rw-rw-r-- 1 root root 488 Jul 4 2024 pod-ephemeral.yaml
-rw-rw-r-- 1 root root 321 Jul 4 2024 pod.yaml
-rw-rw-r-- 1 root root 268 Jul 4 2024 pvc-clone.yaml
-rw-rw-r-- 1 root root 310 Jul 4 2024 pvc-restore.yaml
-rw-rw-r-- 1 root root 195 Jul 4 2024 pvc.yaml
-rw-rw-r-- 1 root root 587 Jul 4 2024 snapshotclass.yaml
-rw-rw-r-- 1 root root 214 Jul 4 2024 snapshot.yaml
-rw-rw-r-- 1 root root 1751 Jul 4 2024 storageclass-ec.yaml
-rw-rw-r-- 1 root root 1602 Jul 4 2024 storageclass.yaml
[root@master231 cephfs]#
[root@master231 cephfs]# kubectl apply -f storageclass-ec.yaml
storageclass.storage.k8s.io/rook-cephfs created
[root@master231 cephfs]#
[root@master231 cephfs]# kubectl get sc rook-cephfs
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
rook-cephfs rook-ceph.cephfs.csi.ceph.com Delete Immediate true 27s
[root@master231 cephfs]#
2.创建pvc
[root@master231 cephfs]# cat pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
  # - ReadWriteOnce
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
[root@master231 cephfs]#
[root@master231 cephfs]# kubectl apply -f pvc.yaml
persistentvolumeclaim/cephfs-pvc created
[root@master231 cephfs]#
3.查看pv对应的数据
[root@master231 cephfs]# kubectl get pvc cephfs-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cephfs-pvc Bound pvc-6c3cce25-ff86-4706-9932-785e486488bc 1Gi RWX rook-cephfs 6s
[root@master231 cephfs]#
[root@master231 cephfs]# kubectl describe pv pvc-6c3cce25-ff86-4706-9932-785e486488bc
Name: pvc-6c3cce25-ff86-4706-9932-785e486488bc
Labels:
Annotations: pv.kubernetes.io/provisioned-by: rook-ceph.cephfs.csi.ceph.com
volume.kubernetes.io/provisioner-deletion-secret-name: rook-csi-cephfs-provisioner
volume.kubernetes.io/provisioner-deletion-secret-namespace: rook-ceph
Finalizers: [external-provisioner.volume.kubernetes.io/finalizer kubernetes.io/pv-protection]
StorageClass: rook-cephfs
Status: Bound
Claim: default/cephfs-pvc
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: rook-ceph.cephfs.csi.ceph.com
FSType:
VolumeHandle: 0001-0009-rook-ceph-0000000000000002-6d666583-37c1-4af6-83a1-06e0c06f33bc
ReadOnly: false
VolumeAttributes: clusterID=rook-ceph
fsName=myfs-ec
pool=myfs-ec-erasurecoded
storage.kubernetes.io/csiProvisionerIdentity=1740367264374-8221-rook-ceph.cephfs.csi.ceph.com
subvolumeName=csi-vol-6d666583-37c1-4af6-83a1-06e0c06f33bc
subvolumePath=/volumes/csi/csi-vol-6d666583-37c1-4af6-83a1-06e0c06f33bc/2154f732-8f28-470a-8366-93b49fa7a849
Events:
[root@master231 cephfs]#
4.使用pvc
[root@master231 volumes]# cat 05-deploy-pvc-sc-cephfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xiuxian-cephfs-pvc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: xixi
  template:
    metadata:
      labels:
        app: xixi
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: cephfs-pvc
          readOnly: false
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
        name: c1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
[root@master231 volumes]#
[root@master231 volumes]#
[root@master231 volumes]# kubectl apply -f 05-deploy-pvc-sc-cephfs.yaml
deployment.apps/xiuxian-cephfs-pvc created
[root@master231 volumes]#
[root@master231 volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-cephfs-pvc-5f9775c869-6zvtx 1/1 Running 0 19s 10.100.1.196 worker232
xiuxian-cephfs-pvc-5f9775c869-bxfx8 1/1 Running 0 19s 10.100.0.85 master231
xiuxian-cephfs-pvc-5f9775c869-htc2r 1/1 Running 0 19s 10.100.2.51 worker233
[root@master231 volumes]#
52 jenkins集成k8s
– 推送代码到gitee
1.新建gitee项目
略,见视频。推荐项目名称"ysl-ysl-yiliao"。
2.配置git
[root@worker233 ~]# git config --global user.name "forestysl2020"
[root@worker233 ~]# git config --global user.email "y1053419035@qq.com"
[root@worker233 ~]#
3.项目初始化
[root@worker233 ~]# mkdir ysl-ysl-yiliao
[root@worker233 ~]# cd ysl-ysl-yiliao
[root@worker233 ysl-ysl-yiliao]# git init
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Initialized empty Git repository in /root/ysl-ysl-yiliao/.git/
[root@worker233 ysl-ysl-yiliao]#
4.编写Readme文件
[root@worker233 ysl-ysl-yiliao]# cat README.md
# 1.项目介绍
```
这是老男孩教育ysl期的Jenkins集成K8S测试项目。
```
# 2.公司网址
```
https://www.ysl.com
```
[root@worker233 ysl-ysl-yiliao]#
5.上传测试代码
[root@worker233 ~]# wget http://192.168.15.253/Resources/Kubernetes/softwares/jenkins/ysl-yiliao.zip
[root@worker233 ~]# unzip ysl-yiliao.zip -d ysl-ysl-yiliao/
6.编写Dockerfile
[root@worker233 ysl-ysl-yiliao]# cat Dockerfile
FROM registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
LABEL school=ysl \
class=ysl
COPY . /usr/share/nginx/html
EXPOSE 80
[root@worker233 ysl-ysl-yiliao]#
7.提交到本地仓库
[root@worker233 ysl-ysl-yiliao]# git add .
[root@worker233 ysl-ysl-yiliao]# git commit -m 'first commit'
8.推送代码到gitee
[root@worker233 ysl-ysl-yiliao]# git remote add origin https://gitee.com/forestysl2020/ysl-ysl-yiliao.git
[root@worker233 ysl-ysl-yiliao]# git push -u origin "master"
Username for 'https://gitee.com': forestysl2020
Password for 'https://forestysl2020@gitee.com':
Enumerating objects: 92, done.
Counting objects: 100% (92/92), done.
Delta compression using up to 2 threads
Compressing objects: 100% (92/92), done.
Writing objects: 100% (92/92), 1.48 MiB | 3.41 MiB/s, done.
Total 92 (delta 11), reused 0 (delta 0), pack-reused 0
remote: Powered by GITEE.COM [1.1.5]
remote: Set trace flag 77436c89
To https://gitee.com/forestysl2020/ysl-ysl-yiliao.git
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
[root@worker233 ysl-ysl-yiliao]#
9.远程仓库验证
https://gitee.com/forestysl2020/ysl-ysl-yiliao
– Jenkins环境部署
1.安装key
wget -O /usr/share/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
2.添加jenkins的软件源
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
3.安装依赖包
[root@jenkins211 ~]# apt-get -y install fontconfig
4.安装JDK
[root@jenkins211 ~]# wget http://192.168.15.253/Resources/Kubernetes/softwares/jenkins/jdk-17_linux-x64_bin.tar.gz
[root@jenkins211 ~]# mkdir -pv /ysl/softwares
mkdir: created directory '/ysl'
mkdir: created directory '/ysl/softwares'
[root@jenkins211 ~]#
[root@jenkins211 ~]# tar xf jdk-17_linux-x64_bin.tar.gz -C /ysl/softwares
[root@jenkins211 ~]#
[root@jenkins211 ~]# cat > /etc/profile.d/jenkins.sh <
7.添加shell功能编译镜像并推送代码到harbor仓库
docker build -t harbor.ysl.com/ysl-yiliao/demo:v0.1 .
docker login -u admin -p 1 harbor.ysl.com
docker push harbor.ysl.com/ysl-yiliao/demo:v0.1
docker logout harbor.ysl.com
8.harbor仓库验证
略,见视频。
课堂练习:
将医疗项目部署到K8S集群,集群外部基于8899端口访问。
– Jenkins部署服务到K8S集群实战案例
1.下载kubectl工具
[root@jenkins211 ~]# wget http://192.168.15.253/Resources/Kubernetes/softwares/jenkins/kubectl-1.23.17
2.将kubectl添加到PATH变量并授权
[root@jenkins211 ~]# mv kubectl-1.23.17 /usr/local/bin/kubectl
[root@jenkins211 ~]#
[root@jenkins211 ~]# chmod +x /usr/local/bin/kubectl
[root@jenkins211 ~]#
[root@jenkins211 ~]# ll /usr/local/bin/kubectl
-rwxr-xr-x 1 root root 45174784 Sep 4 2023 /usr/local/bin/kubectl*
[root@jenkins211 ~]#
3.拷贝认证文件
[root@jenkins211 ~]# mkdir ~/.kube
[root@jenkins211 ~]# scp 10.0.0.231:/root/.kube/config /root/.kube/
4.修改jenkins的脚本内容
docker build -t harbor.ysl.com/ysl-yiliao/demo:${version} .
docker login -u admin -p 1 harbor.ysl.com
docker push harbor.ysl.com/ysl-yiliao/demo:${version}
docker logout harbor.ysl.com
if [ `kubectl get pods -l app=yiliao | wc -l` -eq 0 ] ; then
kubectl create deployment yiliao --image=harbor.ysl.com/ysl-yiliao/demo:${version} --port=80
kubectl expose deployment yiliao --port=80 --type=NodePort
else
kubectl set image deploy yiliao demo=harbor.ysl.com/ysl-yiliao/demo:${version}
fi
5.立即构建jenkins,验证服务是否部署成功
略,见视频。
– Jenkins回滚服务
1.git参数构建
略,见视频。
2.回滚指令
kubectl set image deploy yiliao demo=harbor.ysl.com/ysl-yiliao/demo:${version}
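除了基于git参数重新执行set image回退到指定tag,也可以直接利用Deployment自带的滚动更新历史进行回滚,例如(仅为补充示意):
kubectl rollout history deployment yiliao
kubectl rollout undo deployment yiliao                    # 回滚到上一个版本
kubectl rollout undo deployment yiliao --to-revision=2    # 回滚到指定版本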
– Jenkins的Jenkinsfile实现全流程
1.基于pipeline构建项目
pipeline {
    agent any
    stages {
        stage('从gitee拉取代码') {
            steps {
                git credentialsId: 'gitee', url: 'https://gitee.com/forestysl2020/ysl-ysl-yiliao.git'
            }
        }
        stage('编译镜像') {
            steps {
                sh 'docker build -t harbor.ysl.com/ysl-yiliao/demo:${BUILD_NUMBER} .'
            }
        }
        stage('推送镜像到harbor仓库') {
            steps {
                sh '''docker login -u admin -p 1 harbor.ysl.com
                docker push harbor.ysl.com/ysl-yiliao/demo:${BUILD_NUMBER}
                docker logout harbor.ysl.com'''
            }
        }
        stage('部署或更新医疗项目') {
            steps {
                sh '''if [ `kubectl get pods -l app=yiliao | wc -l` -eq 0 ] ; then
                kubectl create deployment yiliao --image=harbor.ysl.com/ysl-yiliao/demo:${BUILD_NUMBER} --port=80
                kubectl expose deployment yiliao --port=80 --type=NodePort
                else
                kubectl set image deploy yiliao demo=harbor.ysl.com/ysl-yiliao/demo:${BUILD_NUMBER}
                fi'''
            }
        }
    }
}
2.编译代码测试
略,见视频。
– devops架构升级到gitops实现思路
– kubeadm证书升级方案
1.kubeadm自建的CA证书有效期为10年
[root@master231 cephfs]# openssl x509 -noout -text -in /etc/kubernetes/pki/ca.crt
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 0 (0x0)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Feb 10 03:55:33 2025 GMT
Not After : Feb 8 03:55:33 2035 GMT
…
2.kubeadm对于各组件的证书有效期为1年
[root@master231 cephfs]# openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4271520686627480594 (0x3b477d6ed929e412)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Feb 10 03:55:33 2025 GMT
Not After : Feb 10 03:55:33 2026 GMT
…
3.kubeadm查看各组件及自建CA证书的有效期
[root@master231 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster…
[check-expiration] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0225 17:41:49.556696 1769598 utils.go:69] The recommended value for “resolvConf” in “KubeletConfiguration” is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Feb 10, 2026 03:55 UTC 349d ca no
apiserver Feb 10, 2026 03:55 UTC 349d ca no
apiserver-etcd-client Feb 10, 2026 03:55 UTC 349d etcd-ca no
apiserver-kubelet-client Feb 10, 2026 03:55 UTC 349d ca no
controller-manager.conf Feb 10, 2026 03:55 UTC 349d ca no
etcd-healthcheck-client Feb 10, 2026 03:55 UTC 349d etcd-ca no
etcd-peer Feb 10, 2026 03:55 UTC 349d etcd-ca no
etcd-server Feb 10, 2026 03:55 UTC 349d etcd-ca no
front-proxy-client Feb 10, 2026 03:55 UTC 349d front-proxy-ca no
scheduler.conf Feb 10, 2026 03:55 UTC 349d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Feb 08, 2035 03:55 UTC 9y no
etcd-ca Feb 08, 2035 03:55 UTC 9y no
front-proxy-ca Feb 08, 2035 03:55 UTC 9y no
[root@master231 ~]#
4.服务端证书续期
[root@master231 ~]# kubeadm certs renew all
[renew] Reading configuration from the cluster…
[renew] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0225 17:43:45.817175 1771510 utils.go:69] The recommended value for “resolvConf” in “KubeletConfiguration” is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@master231 ~]#
[root@master231 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster…
[check-expiration] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0225 17:44:12.692574 1771932 utils.go:69] The recommended value for “resolvConf” in “KubeletConfiguration” is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Feb 25, 2026 09:43 UTC 364d ca no
apiserver Feb 25, 2026 09:43 UTC 364d ca no
apiserver-etcd-client Feb 25, 2026 09:43 UTC 364d etcd-ca no
apiserver-kubelet-client Feb 25, 2026 09:43 UTC 364d ca no
controller-manager.conf Feb 25, 2026 09:43 UTC 364d ca no
etcd-healthcheck-client Feb 25, 2026 09:43 UTC 364d etcd-ca no
etcd-peer Feb 25, 2026 09:43 UTC 364d etcd-ca no
etcd-server Feb 25, 2026 09:43 UTC 364d etcd-ca no
front-proxy-client Feb 25, 2026 09:43 UTC 364d front-proxy-ca no
scheduler.conf Feb 25, 2026 09:43 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Feb 08, 2035 03:55 UTC 9y no
etcd-ca Feb 08, 2035 03:55 UTC 9y no
front-proxy-ca Feb 08, 2035 03:55 UTC 9y no
[root@master231 ~]#
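续期完成后,按照上面的提示还需要重启kube-apiserver、kube-controller-manager、kube-scheduler和etcd,让它们加载新证书。由于这些组件都是静态Pod,一种常见做法是把manifests临时移走再移回,由kubelet自动重建(以下仅为思路示意,并非唯一方案,生产环境请逐台、谨慎操作):
mkdir -p /tmp/k8s-manifests-backup
mv /etc/kubernetes/manifests/*.yaml /tmp/k8s-manifests-backup/
sleep 60    # 等待kubelet把静态Pod全部停掉
mv /tmp/k8s-manifests-backup/*.yaml /etc/kubernetes/manifests/
kubectl get pods -n kube-system    # 确认控制面组件重新Running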
5.kubelet的客户端证书并没有被续期
[root@master231 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 6710230616234321792 (0x5d1f87317198e780)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Feb 10 03:55:33 2025 GMT
Not After : Feb 10 03:55:35 2026 GMT
…
6.修改静态Pod的kube-controller-manager资源清单
[root@master231 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
…
spec:
  containers:
  - command:
    - kube-controller-manager
…
    # 所签名证书的有效期限。每个 CSR 可以通过设置 spec.expirationSeconds 来请求更短的证书。
    - --cluster-signing-duration=87600h0m0s
    # 启用controller manager自动签发CSR证书,可以不配置,默认就是启用的,但是建议配置上!以防未来版本默认值发生变化!
    - --feature-gates=RotateKubeletServerCertificate=true
…
[root@master231 ~]# kubectl get pods -n kube-system kube-controller-manager-master231
NAME READY STATUS RESTARTS AGE
kube-controller-manager-master231 1/1 Running 0 12s
[root@master231 ~]#
参考链接:
https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/
7.要求kubelet的配置文件中支持证书滚动,默认是启用的,无需配置。
[root@worker232 ~]# vim /var/lib/kubelet/config.yaml
…
rotateCertificates: true
8.客户端节点修改节点的时间并重启kubelet
centos操作如下:
[root@worker232 ~]# date -s "2025-6-4"
[root@worker232 ~]#
[root@worker232 ~]# systemctl restart kubelet
ubuntu系统操作如下:
[root@worker232 ~]# timedatectl set-ntp off # 先关闭时间同步服务。
[root@worker232 ~]#
[root@worker232 ~]# timedatectl set-time '2026-02-09 15:30:00' # 修改即将过期的时间的前一天
[root@worker232 ~]#
[root@worker232 ~]# date
Wed Jun 4 03:30:02 PM CST 2025
[root@worker232 ~]#
[root@worker232 ~]# systemctl restart kubelet.service
53 二进制安装k8s
1.环境准备
CPU: 2c+
Memory: 4g+
Disk: 50g+
10.0.0.41 node-exporter41
10.0.0.42 node-exporter42
10.0.0.43 node-exporter43
2.所有节点安装常用的软件包
[root@node-exporter41 ~]# apt -y install bind9-utils expect rsync jq psmisc net-tools lvm2 vim unzip rename golang-cfssl lrzsz
3.node-exporter41节点免密钥登录集群并同步数据
cat >> /etc/hosts <<'EOF'
10.0.0.240 apiserver-lb
10.0.0.41 node-exporter41
10.0.0.42 node-exporter42
10.0.0.43 node-exporter43
EOF
4.配置免密码登录其他节点
cat > password_free_login.sh <<'EOF'
#!/bin/bash
# auther: forest ysl
# 创建密钥对
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q
# 声明你服务器密码,建议所有节点的密码均一致,否则该脚本需要再次进行优化
export mypasswd='XZnh@95599'
# 定义主机列表
k8s_host_list=(node-exporter41 node-exporter42 node-exporter43)
# 配置免密登录,利用expect工具免交互输入
for i in ${k8s_host_list[@]};do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
\"*yes/no*\" {send \"yes\r\"; exp_continue}
\"*password*\" {send \"$mypasswd\r\"; exp_continue}
}"
done
EOF
bash password_free_login.sh
5 编写同步脚本
cat > /usr/local/sbin/data_rsync.sh <<'EOF'
#!/bin/bash
# Auther: forest ysl
if [ $# -lt 1 ];then
echo "Usage: $0 /path/to/file(绝对路径) [mode: m|w]"
exit
fi
if [ ! -e $1 ];then
echo "[ $1 ] dir or file not find!"
exit
fi
fullpath=`dirname $1`
basename=`basename $1`
cd $fullpath
case $2 in
WORKER_NODE|w)
K8S_NODE=(node-exporter42 node-exporter43)
;;
MASTER_NODE|m)
K8S_NODE=(node-exporter42 node-exporter43)
;;
*)
K8S_NODE=(node-exporter42 node-exporter43)
;;
esac
for host in ${K8S_NODE[@]};do
tput setaf 2
echo ===== rsyncing ${host}: $basename =====
tput setaf 7
rsync -az $basename `whoami`@${host}:$fullpath
if [ $? -eq 0 ];then
echo "命令执行成功!"
fi
done
EOF
chmod +x /usr/local/sbin/data_rsync.sh
data_rsync.sh /etc/hosts
6.所有节点Linux基础环境优化
systemctl disable --now NetworkManager ufw
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
free -h
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
sed -i 's@#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
sed -i 's@^GSSAPIAuthentication yes@GSSAPIAuthentication no@g' /etc/ssh/sshd_config
cat > /etc/sysctl.d/k8s.conf <<'EOF'
# 以下3个参数是containerd所依赖的内核参数
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6 = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
cat >> ~/.bashrc <<'EOF'
PS1='[\[\e[34;1m\]\u@\[\e[0m\]\[\e[32;1m\]\H\[\e[0m\]\[\e[31;1m\] \W\[\e[0m\]]# '
EOF
source ~/.bashrc
7.所有节点安装ipvsadm以实现kube-proxy的负载均衡
apt -y install ipvsadm ipset sysstat conntrack
cat > /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
8.重复所有节点并验证模块是否加载成功
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
uname -r
ifconfig
free -h
- 安装containerd容器运行时
1.如果有docker环境,请卸载
./install-docker.sh r
ip link del docker0
2.安装containerd
tar xf ysl-autoinstall-containerd-v1.7.20.tar.gz
./install-containerd.sh i
[root@node-exporter43 autoinstall-containerd-v1.6.36]# ./install-containerd.sh i
Client:
Version: v1.6.36
Revision: 88c3d9bc5b5a193f40b7c14fa996d23532d6f956
Go version: go1.22.7
Server:
Version: v1.6.36
Revision: 88c3d9bc5b5a193f40b7c14fa996d23532d6f956
UUID: 5e5fa4ba-9748-4f60-bbd8-2234dc1b860d
runc version 1.1.15
commit: v1.1.15-0-gbc20cb44
spec: 1.0.2-dev
go: go1.22.3
libseccomp: 2.5.5
安装成功,欢迎下次使用!
- containerd的名称空间,镜像和容器,任务管理快速入门
1.名称空间管理
1.1 查看现有的名称空间
[root@node-exporter41 ~]# ctr ns ls
NAME LABELS
[root@node-exporter41 ~]#
1.2 创建名称空间
[root@node-exporter41 ~]# ctr ns c ysl-linux
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr ns ls
NAME LABELS
ysl-linux
[root@node-exporter41 ~]#
1.3 删除名称空间
[root@node-exporter41 ~]# ctr ns rm ysl-linux
ysl-linux
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr ns ls
NAME LABELS
[root@node-exporter41 ~]#
温馨提示:
删除的名称空间必须为空,否则无法删除!
2.镜像管理
2.1 拉取镜像到指定的名称空间
[root@node-exporter41 ~]# ctr image pull registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1
registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1: resolved |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:3bee216f250cfd2dbda1744d6849e27118845b8f4d55dda3ca3c6c1227cc2e5c: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:ff9c6add3f30f658b4f44732bef1dd44b6d3276853bba31b0babc247f3eba0dc: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:2dd61e30a21aeb966df205382a40dcbcf45af975cc0cb836d555b9cd0ad760f5: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:f28fd43be4ad41fc768dcc3629f8479d1443df01ada10ac9a771314e4fdef599: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:dcc43d9a97b44cf3b3619f2c185f249891b108ab99abcc58b19a82879b00b24b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:5758d4e389a3f662e94a85fb76143dbe338b64f8d2a65f45536a9663b05305ad: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:5dcfac0f2f9ca3131599455f5e79298202c7e1b5e0eb732498b34e9fe4cb1173: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:2c6e86e57dfd729d8240ceab7c18bd1e5dd006b079837116bc1c3e1de5e1971a: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:51d66f6290217acbf83f15bc23a88338819673445804b1461b2c41d4d0c22f94: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 3.2 s total: 8.9 Mi (2.8 MiB/s)
unpacking linux/amd64 sha256:3bee216f250cfd2dbda1744d6849e27118845b8f4d55dda3ca3c6c1227cc2e5c...
done: 409.869867ms
[root@node-exporter41 ~]# ctr ns ls
NAME LABELS
default
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr -n default i ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1 application/vnd.docker.distribution.manifest.v2+json sha256:3bee216f250cfd2dbda1744d6849e27118845b8f4d55dda3ca3c6c1227cc2e5c 9.6 MiB linux/amd64 -
[root@node-exporter41 ~]#
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr i ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v1 application/vnd.docker.distribution.manifest.v2+json sha256:3bee216f250cfd2dbda1744d6849e27118845b8f4d55dda3ca3c6c1227cc2e5c 9.6 MiB linux/amd64 -
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr -n ysl-ysl image pull registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2
registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2: resolved |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:3ac38ee6161e11f2341eda32be95dcc6746f587880f923d2d24a54c3a525227e: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:b07c4abce9ebb6b450f348265b5d82616ca5aa7c1a975f124f97df4038275068: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:d65adc8a2f327feacad77611d31986381b47f3c0a1ef8ff2488d781e19c60901: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:5758d4e389a3f662e94a85fb76143dbe338b64f8d2a65f45536a9663b05305ad: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:51d66f6290217acbf83f15bc23a88338819673445804b1461b2c41d4d0c22f94: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:ff9c6add3f30f658b4f44732bef1dd44b6d3276853bba31b0babc247f3eba0dc: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:dcc43d9a97b44cf3b3619f2c185f249891b108ab99abcc58b19a82879b00b24b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:5dcfac0f2f9ca3131599455f5e79298202c7e1b5e0eb732498b34e9fe4cb1173: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:2c6e86e57dfd729d8240ceab7c18bd1e5dd006b079837116bc1c3e1de5e1971a: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.5 s total: 0.0 B (0.0 B/s)
unpacking linux/amd64 sha256:3ac38ee6161e11f2341eda32be95dcc6746f587880f923d2d24a54c3a525227e...
done: 372.10925ms
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr ns ls
NAME LABELS
default
ysl-ysl
[root@node-exporter41 ~]#
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr -n ysl-ysl images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2 application/vnd.docker.distribution.manifest.v2+json sha256:3ac38ee6161e11f2341eda32be95dcc6746f587880f923d2d24a54c3a525227e 9.6 MiB linux/amd64 -
[root@node-exporter41 ~]#
2.2 删除镜像
[root@node-exporter41 ~]# ctr -n ysl-ysl images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2 application/vnd.docker.distribution.manifest.v2+json sha256:3ac38ee6161e11f2341eda32be95dcc6746f587880f923d2d24a54c3a525227e 9.6 MiB linux/amd64 -
[root@node-exporter41 ~]#
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr -n ysl-ysl i rm registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2
registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr -n ysl-ysl images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
[root@node-exporter41 ~]#
3.容器管理
3.1 运行一个容器
[root@node-exporter41 ~]# ctr run registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2 haha
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2025/02/26 03:00:32 [notice] 1#1: using the "epoll" event method
2025/02/26 03:00:32 [notice] 1#1: nginx/1.20.1
2025/02/26 03:00:32 [notice] 1#1: built by gcc 10.2.1 20201203 (Alpine 10.2.1_pre1)
2025/02/26 03:00:32 [notice] 1#1: OS: Linux 5.15.0-133-generic
2025/02/26 03:00:32 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:1024
2025/02/26 03:00:32 [notice] 1#1: start worker processes
2025/02/26 03:00:32 [notice] 1#1: start worker process 32
2025/02/26 03:00:32 [notice] 1#1: start worker process 33
2025/02/26 03:00:32 [notice] 1#1: start worker process 34
2025/02/26 03:00:32 [notice] 1#1: start worker process 35
...
3.2 查看容器列表
[root@node-exporter41 ~]# ctr c ls
CONTAINER IMAGE RUNTIME
haha registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2 io.containerd.runc.v2
[root@node-exporter41 ~]#
3.3 查看正在运行的容器ID
[root@node-exporter41 ~]# ctr task ls
TASK PID STATUS
haha 1941 RUNNING
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr t ls
TASK PID STATUS
haha 1941 RUNNING
[root@node-exporter41 ~]#
3.4 连接正在运行的容器
[root@node-exporter41 ~]# ctr t exec -t --exec-id $RANDOM haha sh
/ #
/ # ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ #
温馨提示:
containerd本身并不提供网络,只负责容器的生命周期。
将来网络部分交给专门的CNI插件提供。
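如果只是想临时验证容器里的服务是否可用,可以让容器直接复用宿主机网络(ctr支持--net-host参数),以下仅为测试示意:
ctr run -d --net-host registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2 xixi
curl 127.0.0.1        # 通过宿主机网络直接访问容器内的nginx
清理方式与下文3.5、3.6相同:先 ctr t kill xixi,再 ctr c rm xixi。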
3.5 杀死一个正在运行的容器
[root@node-exporter41 ~]# ctr t ls
TASK PID STATUS
haha 1941 RUNNING
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr t kill haha
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr t ls
TASK PID STATUS
[root@node-exporter41 ~]#
3.6 删除容器
[root@node-exporter41 ~]# ctr c ls
CONTAINER IMAGE RUNTIME
haha registry.cn-hangzhou.aliyuncs.com/yangsenlin-k8s/apps:v2 io.containerd.runc.v2
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr c rm haha
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ctr c ls
CONTAINER IMAGE RUNTIME
[root@node-exporter41 ~]#
更多containerd学习资料推荐:
https://www.cnblogs.com/yangsenlin/p/18030527
https://www.cnblogs.com/yangsenlin/p/18058010
- 安装K8S程序
1.下载K8S程序包
wget https://dl.k8s.io/v1.31.6/kubernetes-server-linux-amd64.tar.gz
2.解压指定的软件包
[root@node-exporter41 ~]# tar xf kubernetes-server-linux-amd64-v1.31.6.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ll /usr/local/bin/kube*
-rwxr-xr-x 1 root root 90542232 Feb 13 05:45 /usr/local/bin/kube-apiserver*
-rwxr-xr-x 1 root root 84754584 Feb 13 05:45 /usr/local/bin/kube-controller-manager*
-rwxr-xr-x 1 root root 56381592 Feb 13 05:45 /usr/local/bin/kubectl*
-rwxr-xr-x 1 root root 76919128 Feb 13 05:45 /usr/local/bin/kubelet*
-rwxr-xr-x 1 root root 64417944 Feb 13 05:45 /usr/local/bin/kube-proxy*
-rwxr-xr-x 1 root root 63725720 Feb 13 05:45 /usr/local/bin/kube-scheduler*
[root@node-exporter41 ~]#
3.查看kubelet的版本
[root@node-exporter41 ~]# kubelet --version
Kubernetes v1.31.6
[root@node-exporter41 ~]#
4.分发软件包
[root@node-exporter41 ~]# for i in `ls -1 /usr/local/bin/kube*`;do data_rsync.sh $i ;done
- 生成k8s组件相关证书
1.生成证书的CSR文件: 证书签发请求文件,配置了一些域名,公司,单位
[root@node-exporter41 pki]# mkdir -pv /ysl/certs/{pki,kubernetes}
[root@node-exporter41 pki]# cat > k8s-ca-csr.json <
inet 10.0.0.41 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::20c:29ff:fec9:e90d prefixlen 64 scopeid 0x20
ether 00:0c:29:c9:e9:0d txqueuelen 1000 (Ethernet)
RX packets 521248 bytes 475366392 (475.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 438247 bytes 335442241 (335.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
…
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# cat > /etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
router_id 10.0.0.41
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 8443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 251
priority 100
advert_int 1
mcast_src_ip 10.0.0.41
nopreempt
authentication {
auth_type PASS
auth_pass yangsenlin_k8s
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.0.0.240
}
}
EOF
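上面的vrrp_script引用了/etc/keepalived/check_port.sh,该脚本在本节未贴出。下面给出一个假设性的最小实现,仅作为示意(检测逻辑和路径均可按实际需要调整):
cat > /etc/keepalived/check_port.sh <<'EOF'
#!/bin/bash
# 假设性的示例脚本:检查指定端口是否有进程监听,没有监听则返回非0,
# 触发keepalived按weight降低本机优先级,从而让VIP漂移到其他节点
PORT=$1
if [ -z "$PORT" ]; then
    exit 1
fi
ss -lnt | grep -q ":${PORT} " || exit 1
EOF
chmod +x /etc/keepalived/check_port.sh
脚本写好后可以复用前面的data_rsync.sh同步到另外两个节点。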
3.2 "node-exporter42"节点创建配置文件
[root@node-exporter42 ~]# ifconfig
eth0: flags=4163
inet 10.0.0.42 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::20c:29ff:fed2:6aeb prefixlen 64 scopeid 0x20
ether 00:0c:29:d2:6a:eb txqueuelen 1000 (Ethernet)
RX packets 438380 bytes 196700829 (196.7 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 342527 bytes 42962873 (42.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
…
[root@node-exporter42 ~]#
[root@node-exporter42 ~]# cat > /etc/keepalived/keepalived.conf <
inet 10.0.0.43 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::20c:29ff:fe02:429e prefixlen 64 scopeid 0x20
ether 00:0c:29:02:42:9e txqueuelen 1000 (Ethernet)
RX packets 321281 bytes 184936745 (184.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 220950 bytes 31132382 (31.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
…
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# cat > /etc/keepalived/keepalived.conf <
200 OK
Service ready.
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# curl http://10.0.0.42:9999/ruok
200 OK
Service ready.
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# curl http://10.0.0.43:9999/ruok
200 OK
Service ready.
[root@node-exporter43 ~]#
5.启动keepalived服务并验证
5.1.所有节点启动keepalived服务
systemctl daemon-reload
systemctl enable --now keepalived
systemctl status keepalived
5.2 验证服务是否正常
[root@node-exporter43 ~]# ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0:
link/ether 00:0c:29:02:42:9e brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.43/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.240/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe02:429e/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE:
link/ipip 0.0.0.0 brd 0.0.0.0
[root@node-exporter43 ~]#
5.3 基于telnet验证haproxy是否正常
[root@node-exporter41 ~]# telnet 10.0.0.240 8443
Trying 10.0.0.240...
Connected to 10.0.0.240.
Escape character is '^]'.
Connection closed by foreign host.
[root@node-exporter41 ~]#
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# ping 10.0.0.240 -c 3
PING 10.0.0.240 (10.0.0.240) 56(84) bytes of data.
64 bytes from 10.0.0.240: icmp_seq=1 ttl=64 time=0.218 ms
64 bytes from 10.0.0.240: icmp_seq=2 ttl=64 time=0.177 ms
64 bytes from 10.0.0.240: icmp_seq=3 ttl=64 time=0.169 ms
--- 10.0.0.240 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2037ms
rtt min/avg/max/mdev = 0.169/0.188/0.218/0.021 ms
[root@node-exporter41 ~]#
5.4 将VIP节点的haproxy停止
[root@node-exporter43 ~]# ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0:
link/ether 00:0c:29:02:42:9e brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.43/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.240/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe02:429e/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE:
link/ipip 0.0.0.0 brd 0.0.0.0
[root@node-exporter43 ~]#
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# systemctl status haproxy.service
● haproxy.service – HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2025-02-26 12:18:14 CST; 5min ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 3389 ExecStartPre=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
Main PID: 3391 (haproxy)
Tasks: 3 (limit: 9350)
Memory: 4.0M
CPU: 57ms
CGroup: /system.slice/haproxy.service
├─3391 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
└─3393 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
Feb 26 12:18:13 node-exporter43 systemd[1]: Starting HAProxy Load Balancer…
Feb 26 12:18:14 node-exporter43 haproxy[3391]: [WARNING] (3391) : parsing [/etc/haproxy/haproxy.cfg:33] : backend ‘yangsenlin-k8s’ :>
Feb 26 12:18:14 node-exporter43 haproxy[3391]: [NOTICE] (3391) : New worker #1 (3393) forked
Feb 26 12:18:14 node-exporter43 systemd[1]: Started HAProxy Load Balancer.
Feb 26 12:18:14 node-exporter43 haproxy[3393]: [WARNING] (3393) : Server yangsenlin-k8s/node-exporter41 is DOWN, reason: Layer4 conn>
Feb 26 12:18:17 node-exporter43 haproxy[3393]: [WARNING] (3393) : Server yangsenlin-k8s/node-exporter42 is DOWN, reason: Layer4 conn>
Feb 26 12:18:20 node-exporter43 haproxy[3393]: [WARNING] (3393) : Server yangsenlin-k8s/node-exporter43 is DOWN, reason: Layer4 conn>
Feb 26 12:18:20 node-exporter43 haproxy[3393]: [NOTICE] (3393) : haproxy version is 2.4.24-0ubuntu0.22.04.1
Feb 26 12:18:20 node-exporter43 haproxy[3393]: [NOTICE] (3393) : path to executable is /usr/sbin/haproxy
Feb 26 12:18:20 node-exporter43 haproxy[3393]: [ALERT] (3393) : backend ‘yangsenlin-k8s’ has no server available!
[root@node-exporter43 ~]#
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# systemctl stop haproxy.service # 停止服务
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# systemctl status haproxy.service
○ haproxy.service – HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Wed 2025-02-26 12:24:12 CST; 34s ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 3389 ExecStartPre=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
Process: 3391 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS (code=exited, status=0/SUCCESS)
Main PID: 3391 (code=exited, status=0/SUCCESS)
CPU: 60ms
Feb 26 12:18:20 node-exporter43 haproxy[3393]: [NOTICE] (3393) : path to executable is /usr/sbin/haproxy
Feb 26 12:18:20 node-exporter43 haproxy[3393]: [ALERT] (3393) : backend ‘yangsenlin-k8s’ has no server available!
Feb 26 12:24:12 node-exporter43 systemd[1]: Stopping HAProxy Load Balancer…
Feb 26 12:24:12 node-exporter43 haproxy[3391]: [WARNING] (3391) : Exiting Master process…
Feb 26 12:24:12 node-exporter43 haproxy[3391]: [NOTICE] (3391) : haproxy version is 2.4.24-0ubuntu0.22.04.1
Feb 26 12:24:12 node-exporter43 haproxy[3391]: [NOTICE] (3391) : path to executable is /usr/sbin/haproxy
Feb 26 12:24:12 node-exporter43 haproxy[3391]: [ALERT] (3391) : Current worker #1 (3393) exited with code 143 (Terminated)
Feb 26 12:24:12 node-exporter43 haproxy[3391]: [WARNING] (3391) : All workers exited. Exiting… (0)
Feb 26 12:24:12 node-exporter43 systemd[1]: haproxy.service: Deactivated successfully.
Feb 26 12:24:12 node-exporter43 systemd[1]: Stopped HAProxy Load Balancer.
[root@node-exporter43 ~]#
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# systemctl status keepalived
○ keepalived.service – Keepalive Daemon (LVS and VRRP)
Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Wed 2025-02-26 12:24:13 CST; 36s ago
Process: 3409 ExecStart=/usr/sbin/keepalived –dont-fork $DAEMON_ARGS (code=exited, status=0/SUCCESS)
Main PID: 3409 (code=exited, status=0/SUCCESS)
CPU: 573ms
Feb 26 12:20:44 node-exporter43 Keepalived_vrrp[3410]: VRRP_Script(chk_nginx) succeeded
Feb 26 12:20:48 node-exporter43 Keepalived_vrrp[3410]: (VI_1) Entering MASTER STATE
Feb 26 12:24:12 node-exporter43 systemd[1]: Stopping Keepalive Daemon (LVS and VRRP)…
Feb 26 12:24:12 node-exporter43 Keepalived[3409]: Stopping
Feb 26 12:24:12 node-exporter43 Keepalived_vrrp[3410]: VRRP_Script(chk_nginx) failed (due to signal 15)
Feb 26 12:24:12 node-exporter43 Keepalived_vrrp[3410]: (VI_1) Changing effective priority from 100 to 80
Feb 26 12:24:13 node-exporter43 Keepalived_vrrp[3410]: Stopped
Feb 26 12:24:13 node-exporter43 Keepalived[3409]: Stopped Keepalived v2.2.4 (08/21,2021)
Feb 26 12:24:13 node-exporter43 systemd[1]: keepalived.service: Deactivated successfully.
Feb 26 12:24:13 node-exporter43 systemd[1]: Stopped Keepalive Daemon (LVS and VRRP).
[root@node-exporter43 ~]#
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0:
link/ether 00:0c:29:02:42:9e brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.43/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe02:429e/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE:
link/ipip 0.0.0.0 brd 0.0.0.0
[root@node-exporter43 ~]#
5.5 Check whether the VIP has drifted to another node
[root@node-exporter42 ~]# ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0:
link/ether 00:0c:29:d2:6a:eb brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.42/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.240/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed2:6aeb/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE:
link/ipip 0.0.0.0 brd 0.0.0.0
[root@node-exporter42 ~]#
5.6 Start the previously stopped haproxy and keepalived; note that the VIP does not drift back
[root@node-exporter43 ~]# systemctl start haproxy.service keepalived.service
[root@node-exporter43 ~]#
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# systemctl status haproxy.service keepalived.service
● haproxy.service – HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2025-02-26 12:26:10 CST; 27s ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 4011 ExecStartPre=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
Main PID: 4013 (haproxy)
Tasks: 3 (limit: 9350)
Memory: 3.9M
CPU: 17ms
CGroup: /system.slice/haproxy.service
├─4013 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
└─4015 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
Feb 26 12:26:10 node-exporter43 systemd[1]: Starting HAProxy Load Balancer…
Feb 26 12:26:10 node-exporter43 haproxy[4013]: [WARNING] (4013) : parsing [/etc/haproxy/haproxy.cfg:33] : backend ‘yangsenlin-k8s’ :>
Feb 26 12:26:10 node-exporter43 haproxy[4013]: [NOTICE] (4013) : New worker #1 (4015) forked
Feb 26 12:26:10 node-exporter43 systemd[1]: Started HAProxy Load Balancer.
Feb 26 12:26:10 node-exporter43 haproxy[4015]: [WARNING] (4015) : Server yangsenlin-k8s/node-exporter41 is DOWN, reason: Layer4 conn>
Feb 26 12:26:14 node-exporter43 haproxy[4015]: [WARNING] (4015) : Server yangsenlin-k8s/node-exporter42 is DOWN, reason: Layer4 conn>
Feb 26 12:26:17 node-exporter43 haproxy[4015]: [WARNING] (4015) : Server yangsenlin-k8s/node-exporter43 is DOWN, reason: Layer4 conn>
Feb 26 12:26:17 node-exporter43 haproxy[4015]: [NOTICE] (4015) : haproxy version is 2.4.24-0ubuntu0.22.04.1
Feb 26 12:26:17 node-exporter43 haproxy[4015]: [NOTICE] (4015) : path to executable is /usr/sbin/haproxy
Feb 26 12:26:17 node-exporter43 haproxy[4015]: [ALERT] (4015) : backend ‘yangsenlin-k8s’ has no server available!
● keepalived.service – Keepalive Daemon (LVS and VRRP)
Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2025-02-26 12:26:15 CST; 23s ago
Main PID: 4050 (keepalived)
[root@node-exporter43 ~]#
[root@node-exporter43 ~]# ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0:
link/ether 00:0c:29:02:42:9e brd ff:ff:ff:ff:ff:ff
altname enp2s1
altname ens33
inet 10.0.0.43/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe02:429e/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE:
link/ipip 0.0.0.0 brd 0.0.0.0
[root@node-exporter43 ~]#
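Whether the VIP drifts back after the original MASTER recovers is decided by keepalived's preemption settings, not by haproxy. A minimal check, assuming the default config path and the VRRP instance name VI_1 seen in the logs above; the comments describe common behavior and are not necessarily this cluster's exact configuration:
[root@node-exporter43 ~]# grep -E 'state|priority|nopreempt' /etc/keepalived/keepalived.conf
# If "nopreempt" is set (it requires "state BACKUP"), a recovered node leaves the VIP on the
# current MASTER, which matches the behavior observed here.
# Without it, the node with the highest effective priority preempts the VIP back as soon as
# its health-check script (chk_nginx) succeeds again.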
- Deploy the kube-apiserver component
1 Start the ApiServer on node-exporter41
Tips:
- "--advertise-address" is the IP address of the corresponding master node;
- "--service-cluster-ip-range" is the Service (svc) network segment;
- "--service-node-port-range" is the NodePort range for Services;
- "--etcd-servers" specifies the etcd cluster endpoints.
Reference for the configuration options:
https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/
Hands-on steps:
1.1 Create the configuration file on node-exporter41
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=forest ysl's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--allow-privileged=true \
--advertise-address=10.0.0.41 \
--service-cluster-ip-range=10.200.0.0/16 \
--service-node-port-range=3000-50000 \
--etcd-servers=https://10.0.0.41:2379,https://10.0.0.42:2379,https://10.0.0.43:2379 \
--etcd-cafile=/ysl/certs/etcd/etcd-ca.pem \
--etcd-certfile=/ysl/certs/etcd/etcd-server.pem \
--etcd-keyfile=/ysl/certs/etcd/etcd-server-key.pem \
--client-ca-file=/ysl/certs/kubernetes/k8s-ca.pem \
--tls-cert-file=/ysl/certs/kubernetes/apiserver.pem \
--tls-private-key-file=/ysl/certs/kubernetes/apiserver-key.pem \
--kubelet-client-certificate=/ysl/certs/kubernetes/apiserver.pem \
--kubelet-client-key=/ysl/certs/kubernetes/apiserver-key.pem \
--service-account-key-file=/ysl/certs/kubernetes/sa.pub \
--service-account-signing-key-file=/ysl/certs/kubernetes/sa.key \
--service-account-issuer=https://kubernetes.default.svc.ysl.com \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/ysl/certs/kubernetes/front-proxy-ca.pem \
--proxy-client-cert-file=/ysl/certs/kubernetes/front-proxy-client.pem \
--proxy-client-key-file=/ysl/certs/kubernetes/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
1.2 Start the service
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver
ss -ntl | grep 6443
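Besides confirming the listening port, the API server's health endpoints can be probed directly. A minimal sketch; it assumes the default bootstrap RBAC policy that lets anonymous requests reach /healthz, /livez and /readyz (adjust if anonymous auth has been disabled):
# quick smoke test, skipping TLS verification
curl -sk https://10.0.0.41:6443/healthz && echo
# or validate the server certificate against the CA generated earlier,
# assuming 10.0.0.41 is included in the apiserver certificate's SANs
curl -s --cacert /ysl/certs/kubernetes/k8s-ca.pem https://10.0.0.41:6443/healthz && echo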
2 Start the ApiServer on node-exporter42
Tips:
- "--advertise-address" is the IP address of the corresponding master node;
- "--service-cluster-ip-range" is the Service (svc) network segment;
- "--service-node-port-range" is the NodePort range for Services;
- "--etcd-servers" specifies the etcd cluster endpoints.
Hands-on steps:
2.1 Create the configuration file on node-exporter42
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=forest ysl's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=10.0.0.42 \
--service-cluster-ip-range=10.200.0.0/16 \
--service-node-port-range=3000-50000 \
--etcd-servers=https://10.0.0.41:2379,https://10.0.0.42:2379,https://10.0.0.43:2379 \
--etcd-cafile=/ysl/certs/etcd/etcd-ca.pem \
--etcd-certfile=/ysl/certs/etcd/etcd-server.pem \
--etcd-keyfile=/ysl/certs/etcd/etcd-server-key.pem \
--client-ca-file=/ysl/certs/kubernetes/k8s-ca.pem \
--tls-cert-file=/ysl/certs/kubernetes/apiserver.pem \
--tls-private-key-file=/ysl/certs/kubernetes/apiserver-key.pem \
--kubelet-client-certificate=/ysl/certs/kubernetes/apiserver.pem \
--kubelet-client-key=/ysl/certs/kubernetes/apiserver-key.pem \
--service-account-key-file=/ysl/certs/kubernetes/sa.pub \
--service-account-signing-key-file=/ysl/certs/kubernetes/sa.key \
--service-account-issuer=https://kubernetes.default.svc.ysl.com \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/ysl/certs/kubernetes/front-proxy-ca.pem \
--proxy-client-cert-file=/ysl/certs/kubernetes/front-proxy-client.pem \
--proxy-client-key-file=/ysl/certs/kubernetes/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
2.2 Start the service
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver
ss -ntl | grep 6443
3 Start the ApiServer on node-exporter43
Tips:
- "--advertise-address" is the IP address of the corresponding master node;
- "--service-cluster-ip-range" is the Service (svc) network segment;
- "--service-node-port-range" is the NodePort range for Services;
- "--etcd-servers" specifies the etcd cluster endpoints.
Hands-on steps:
3.1 Create the configuration file on node-exporter43
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=forest ysl's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=10.0.0.43 \
--service-cluster-ip-range=10.200.0.0/16 \
--service-node-port-range=3000-50000 \
--etcd-servers=https://10.0.0.41:2379,https://10.0.0.42:2379,https://10.0.0.43:2379 \
--etcd-cafile=/ysl/certs/etcd/etcd-ca.pem \
--etcd-certfile=/ysl/certs/etcd/etcd-server.pem \
--etcd-keyfile=/ysl/certs/etcd/etcd-server-key.pem \
--client-ca-file=/ysl/certs/kubernetes/k8s-ca.pem \
--tls-cert-file=/ysl/certs/kubernetes/apiserver.pem \
--tls-private-key-file=/ysl/certs/kubernetes/apiserver-key.pem \
--kubelet-client-certificate=/ysl/certs/kubernetes/apiserver.pem \
--kubelet-client-key=/ysl/certs/kubernetes/apiserver-key.pem \
--service-account-key-file=/ysl/certs/kubernetes/sa.pub \
--service-account-signing-key-file=/ysl/certs/kubernetes/sa.key \
--service-account-issuer=https://kubernetes.default.svc.ysl.com \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/ysl/certs/kubernetes/front-proxy-ca.pem \
--proxy-client-cert-file=/ysl/certs/kubernetes/front-proxy-client.pem \
--proxy-client-key-file=/ysl/certs/kubernetes/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
3.2 Start the service
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver
ss -ntl | grep 6443
- Deploy the kube-controller-manager component
1 Create the configuration file on all master nodes
Tips:
- "--cluster-cidr" is the Pod network segment; adjust it as needed.
Reference for the configuration options:
https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/
The kube-controller-manager configuration file is identical on all nodes (provided the certificate files are stored in the same paths!):
cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=forest ysl's Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--root-ca-file=/ysl/certs/kubernetes/k8s-ca.pem \
--cluster-signing-cert-file=/ysl/certs/kubernetes/k8s-ca.pem \
--cluster-signing-key-file=/ysl/certs/kubernetes/k8s-ca-key.pem \
--service-account-private-key-file=/ysl/certs/kubernetes/sa.key \
--kubeconfig=/ysl/certs/kubeconfig/kube-controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=10.100.0.0/16 \
--requestheader-client-ca-file=/ysl/certs/kubernetes/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
2. Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
ss -ntl | grep 10257
- Deploy the kube-scheduler component
1 Create the configuration file on all master nodes
Reference for the configuration options:
https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/
The kube-scheduler configuration file is identical on all nodes (provided the certificate files are stored in the same paths!):
cat > /usr/lib/systemd/system/kube-scheduler.service <<'EOF'
[Unit]
Description=forest ysl's Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--leader-elect=true \
--kubeconfig=/ysl/certs/kubeconfig/kube-scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
2. Start the kube-scheduler service
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
ss -ntl | grep 10259
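Both components also expose a /healthz endpoint on their secure ports (10257 for kube-controller-manager, 10259 for kube-scheduler), and these paths are reachable without credentials by default; a quick sketch to confirm both are healthy on each master:
curl -sk https://127.0.0.1:10257/healthz && echo   # kube-controller-manager, expect "ok"
curl -sk https://127.0.0.1:10259/healthz && echo   # kube-scheduler, expect "ok"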
- Create the bootstrapping configuration for automatic kubelet certificate issuance
1. Create the bootstrap-kubelet.kubeconfig file
Tips:
- "--server" points to the load balancer's IP address; the load balancer reverse-proxies requests to the master nodes.
- "--token" can be customized, but the "token-id" and "token-secret" values of the bootstrap Secret must then be changed to match (see the sketch after step 1.4).
1.1 Set the cluster
kubectl config set-cluster yangsenlin-k8s \
--certificate-authority=/ysl/certs/kubernetes/k8s-ca.pem \
--embed-certs=true \
--server=https://10.0.0.240:8443 \
--kubeconfig=/ysl/certs/kubeconfig/bootstrap-kubelet.kubeconfig
1.2 Create the user
kubectl config set-credentials tls-bootstrap-token-user \
--token=ysl.forestyangsenlin \
--kubeconfig=/ysl/certs/kubeconfig/bootstrap-kubelet.kubeconfig
1.3 Bind the cluster to the user (create the context)
kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=yangsenlin-k8s \
--user=tls-bootstrap-token-user \
--kubeconfig=/ysl/certs/kubeconfig/bootstrap-kubelet.kubeconfig
1.4 Set the default context
kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/ysl/certs/kubeconfig/bootstrap-kubelet.kubeconfig
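For reference, the token configured above must line up with a bootstrap token Secret in the kube-system namespace (created in step 3 below). A minimal illustrative sketch of such a Secret for the token "ysl.forestyangsenlin"; it is an example only and not necessarily identical to this document's own bootstrap-secret.yaml:
cat > bootstrap-token-example.yaml << 'EOF'
apiVersion: v1
kind: Secret
metadata:
  # bootstrap token Secrets must be named bootstrap-token-<token-id>
  name: bootstrap-token-ysl
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: ysl                       # the part of --token before the dot
  token-secret: forestyangsenlin      # the part of --token after the dot
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token
EOF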
2. Copy the admin kubeconfig on all master nodes
Tips:
The steps below are demonstrated on node-exporter41, but they can be performed on every master node.
2.1 Copy the administrator's kubeconfig file on all masters
mkdir -p /root/.kube
cp /ysl/certs/kubeconfig/kube-admin.kubeconfig /root/.kube/config
2.2 Check the control plane components. The ComponentStatus API has been deprecated since v1.19+, but it still has not been removed as of 1.30.2.
[root@node-exporter41 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy ok
[root@node-exporter41 ~]#
2.3 Check the cluster status. Even if the cs (ComponentStatus) resource is removed in the future, the "cluster-info" subcommand can still be used to inspect the cluster.
[root@node-exporter41 ~]# kubectl cluster-info
Kubernetes control plane is running at https://10.0.0.240:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@node-exporter41 ~]#
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# kubectl cluster-info dump
{
"kind": "NodeList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "EventList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": [
{
"metadata": {
"name": "kube-controller-manager.1827aef9ccc7bf38",
"namespace": "kube-system",
"uid": "73e095cf-7e1c-44dd-b91f-f64b2d75316a",
"resourceVersion": "399",
"creationTimestamp": "2025-02-26T06:42:48Z"
},
"involvedObject": {
"kind": "Lease",
"namespace": "kube-system",
"name": "kube-controller-manager",
"uid": "8ae595db-8118-4171-94e4-eb1c76f74f6d",
"apiVersion": "coordination.k8s.io/v1",
"resourceVersion": "397"
},
"reason": "LeaderElection",
"message": "node-exporter41_2b4c898b-93fc-4f06-a423-5eaa27bc65c0 became leader",
"source": {
"component": "kube-controller-manager"
},
"firstTimestamp": "2025-02-26T06:42:48Z",
"lastTimestamp": "2025-02-26T06:42:48Z",
"count": 1,
"type": "Normal",
"eventTime": null,
"reportingComponent": "kube-controller-manager",
"reportingInstance": ""
},
{
"metadata": {
"name": "kube-scheduler.1827af1b1ad7bfa7",
"namespace": "kube-system",
"uid": "2b0b35bb-e7e3-4ca0-bf17-66470b9a1bbc",
"resourceVersion": "607",
"creationTimestamp": "2025-02-26T06:45:11Z"
},
"involvedObject": {
"kind": "Lease",
"namespace": "kube-system",
"name": "kube-scheduler",
"uid": "95110033-2f75-43c2-b45e-dd869e5213fe",
"apiVersion": "coordination.k8s.io/v1",
"resourceVersion": "605"
},
"reason": "LeaderElection",
"message": "node-exporter42_5144bbbd-511b-4e02-87f6-5c86a280959a became leader",
"source": {
"component": "default-scheduler"
},
"firstTimestamp": "2025-02-26T06:45:11Z",
"lastTimestamp": "2025-02-26T06:45:11Z",
"count": 1,
"type": "Normal",
"eventTime": null,
"reportingComponent": "default-scheduler",
"reportingInstance": ""
}
]
}
{
"kind": "ReplicationControllerList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "DaemonSetList",
"apiVersion": "apps/v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "DeploymentList",
"apiVersion": "apps/v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "ReplicaSetList",
"apiVersion": "apps/v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "EventList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "ReplicationControllerList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": [
{
"metadata": {
"name": "kubernetes",
"namespace": "default",
"uid": "98413c86-991e-4b59-ab2a-dd570b7fc6b7",
"resourceVersion": "205",
"creationTimestamp": "2025-02-26T06:35:00Z",
"labels": {
"component": "apiserver",
"provider": "kubernetes"
}
},
"spec": {
"ports": [
{
"name": "https",
"protocol": "TCP",
"port": 443,
"targetPort": 6443
}
],
"clusterIP": "10.200.0.1",
"clusterIPs": [
"10.200.0.1"
],
"type": "ClusterIP",
"sessionAffinity": "None",
"ipFamilies": [
"IPv4"
],
"ipFamilyPolicy": "SingleStack",
"internalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {}
}
}
]
}
{
"kind": "DaemonSetList",
"apiVersion": "apps/v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "DeploymentList",
"apiVersion": "apps/v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "ReplicaSetList",
"apiVersion": "apps/v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1248"
},
"items": []
}
[root@node-exporter41 ~]#
3 Create the bootstrap-secret authorization
3.1 Create the bootstrap-secret configuration file used for authorization
[root@k8s-node-exporter41 ~]# cat > bootstrap-secret.yaml <
node-exporter42 NotReady
node-exporter43 NotReady
[root@node-exporter41 ~]#
[root@node-exporter42 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node-exporter41 NotReady
node-exporter42 NotReady
node-exporter43 NotReady
[root@node-exporter42 ~]#
[root@node-exporter43 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node-exporter41 NotReady
node-exporter42 NotReady
node-exporter43 NotReady
[root@node-exporter43 ~]#
7 The corresponding client certificate signing requests (CSRs) from the bootstrap users can now be seen
[root@node-exporter41 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-4hds8 2m46s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:ysl
csr-8ggts 2m48s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:ysl
csr-cnr9k 6m32s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:ysl
[root@node-exporter41 ~]#
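If any of these requests stays in the Pending state (i.e. it is not auto-approved), it can be approved manually; a small sketch:
# approve a single request
kubectl certificate approve csr-4hds8
# or approve everything that is currently listed
kubectl get csr -o name | xargs kubectl certificate approve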
- Deploy the kube-proxy service on the worker nodes
1. Generate the kube-proxy CSR file
[root@node-exporter41 pki]# cat > kube-proxy-csr.json <
[root@node-exporter41 calico]#
Bonus tip:
Export the image
[root@node-exporter41 calico]# ctr -n k8s.io i export ysl-operator-v1.36.5.tar.gz quay.io/tigera/operator:v1.36.5
[root@node-exporter41 calico]#
Import the image:
[root@node-exporter41 calico]# ctr i import ysl-operator-v1.36.5.tar.gz
unpacking quay.io/tigera/operator:v1.36.5 (sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b)…done
[root@node-exporter41 calico]#
[root@node-exporter41 calico]# ctr -n k8s.io i import ysl-operator-v1.36.5.tar.gz
unpacking quay.io/tigera/operator:v1.36.5 (sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b)…done
[root@node-exporter41 calico]#
4. Download the custom resources manifest
[root@node-exporter41 calico]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/custom-resources.yaml
[root@node-exporter41 calico]# grep 192 custom-resources.yaml
cidr: 192.168.0.0/16
[root@node-exporter41 calico]#
[root@node-exporter41 calico]#
[root@node-exporter41 calico]# sed -i '/192/s#192.168#10.100#' custom-resources.yaml
[root@node-exporter41 calico]#
[root@node-exporter41 calico]# grep 100 custom-resources.yaml
cidr: 10.100.0.0/16
[root@node-exporter41 calico]#
[root@node-exporter41 calico]# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
[root@node-exporter41 calico]#
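After the custom resources are created, the tigera-operator rolls Calico out in the background; a quick sketch to watch the rollout (the tigerastatus resource is installed by the operator manifest):
kubectl get tigerastatus
watch kubectl get pods -n calico-system -o wide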
- Fallback plan: use flannel instead
1. Download the manifest
[root@node-exporter41 ~]# wget http://192.168.15.253/Resources/Kubernetes/K8S%20Cluster/kube-flannel.yml
2. Deploy the service
[root@node-exporter41 ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@node-exporter41 ~]#
3. Fix the problem of missing CNI plugins
wget https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
svip:
wget http://192.168.15.253/Resources/Kubernetes/Add-ons/cni/Flannel/cni-plugins-linux-amd64-v1.6.2.tgz
tar xf cni-plugins-linux-amd64-v1.6.2.tgz -C /opt/cni/bin/
4. Enable shell auto-completion
kubectl completion bash > ~/.kube/completion.bash.inc
echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bashrc
source $HOME/.bashrc
5. Deploy the CoreDNS service
5.1 Download the manifest
wget http://192.168.15.253/Resources/Kubernetes/Add-ons/CoreDNS/coredns.yaml.base
Reference link:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns
5.2 Replace the key placeholder fields (see the sed sketch below)
__DNS__DOMAIN__          ->  ysl.com
__DNS__MEMORY__LIMIT__   ->  200Mi
__DNS__SERVER__          ->  10.200.0.254
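The three placeholders can be substituted in one pass with sed; a minimal sketch using the values above:
sed -i \
  -e 's/__DNS__DOMAIN__/ysl.com/g' \
  -e 's/__DNS__MEMORY__LIMIT__/200Mi/g' \
  -e 's/__DNS__SERVER__/10.200.0.254/g' \
  coredns.yaml.base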
5.3 Deploy the CoreDNS component
[root@node-exporter41 ~]# kubectl apply -f coredns.yaml.base
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@node-exporter41 ~]#
5.4 A problem you may run into
[FATAL] plugin/loop: Loop (127.0.0.1:42763 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 4217337441363791670.8544625935495254661."
Solution:
1. Prepare a resolver configuration file
[root@node-exporter41 ~]# cat /etc/kubernetes/resolv.conf
nameserver 223.5.5.5
options edns0 trust-ad
search .
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# data_rsync.sh /etc/kubernetes/resolv.conf
2. Point the kubelet at this DNS configuration file
[root@node-exporter41 ~]# grep ^resolvConf /etc/kubernetes/kubelet-conf.yml
resolvConf: /etc/kubernetes/resolv.conf
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# data_rsync.sh /etc/kubernetes/kubelet-conf.yml
3. Restart the service for the configuration to take effect
systemctl restart kubelet.service
4. Troubleshooting a subsequent error:
…
Normal SandboxChanged 0s (x2 over 3m3s) kubelet Pod sandbox changed, it will be killed and re-created.
…
This is probably related to containerd; consider switching to the containerd 1.6 LTS release for testing.
Reference commands:
./install-containerd.sh r
rm -rf *
wget http://192.168.15.253/Resources/Containerd/ysl-autoinstall-containerd-v1.6.36.tar.gz
tar xf ysl-autoinstall-containerd-v1.6.36.tar.gz
./install-containerd.sh i
5. Verify the DNS service
[root@node-exporter41 ~]# kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.200.0.1
kube-system kube-dns ClusterIP 10.200.0.254
[root@node-exporter41 ~]# dig @10.200.0.254 kube-dns.kube-system.svc.ysl.com +short
10.200.0.254
[root@node-exporter41 ~]#
[root@node-exporter41 ~]# dig @10.200.0.254 kubernetes.default.svc.ysl.com +short
10.200.0.1
[root@node-exporter41 ~]#
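DNS can also be verified from inside a Pod rather than from the node; a minimal sketch, assuming the busybox image can be pulled by the cluster:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup kubernetes.default.svc.ysl.com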