Installing Kubernetes 1.34.2 with kubeadm
These notes document deploying a Kubernetes cluster with kubeadm on Ubuntu 22.04.5 servers, using nerdctl as the container CLI.
Environment
| Node | IP | Role |
|---|---|---|
| k8s-master01 | 10.0.0.10 | Control plane |
| k8s-worker01 | 10.0.0.20 | Worker |
Prerequisites
Every step in this section must be run on all nodes.
Host resolution
# cat /etc/hosts
10.0.0.10 k8s-master01
10.0.0.20 k8s-worker01
Passwordless SSH between nodes
ssh-keygen -N "" -f /root/.ssh/id_rsa -q
ssh-copy-id k8s-master01
ssh-copy-id k8s-worker01
Time synchronization
Install and configure an NTP client. Aliyun's time server is used here; change it to suit your environment.
apt-get install -y chrony
sed -i 's/^pool/#pool/' /etc/chrony/chrony.conf
sed -i '0,/^#pool.*/{s/^#pool.*/server ntp.aliyun.com iburst\n&/}' /etc/chrony/chrony.conf
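The second sed is dense: the `0,/regexp/` address restricts the substitution to the first matching line only. A dry run against a sample chrony.conf fragment (no system files touched, sample pool lines are illustrative) shows the effect:

```shell
# Dry-run the chrony edits against a sample config fragment.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
pool ntp.ubuntu.com iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
EOF

# Step 1: comment out every stock pool line.
sed -i 's/^pool/#pool/' "$CONF"
# Step 2: on the first commented pool line only (the 0,/re/ address),
# insert the Aliyun server line above it.
sed -i '0,/^#pool.*/{s/^#pool.*/server ntp.aliyun.com iburst\n&/}' "$CONF"
cat "$CONF"
```

The result keeps both stock pools commented, with `server ntp.aliyun.com iburst` as the first line.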
Set the timezone to Asia/Shanghai
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
Restart and enable the service to apply the change
systemctl restart chrony && systemctl enable chrony
Check the synchronization status
chronyc sources -v
Installing nerdctl
nerdctl is a Docker-compatible CLI designed for containerd, providing a docker-like user experience.
The download link is below; if the server cannot reach it directly, download the archive elsewhere and copy it over.
https://github.com/containerd/nerdctl/releases/download/v2.2.0/nerdctl-full-2.2.0-linux-amd64.tar.gz
# ls
nerdctl-full-2.2.0-linux-amd64.tar.gz
Extract it into /usr/local (the leading C tells tar to change into that directory before extracting)
tar Cxzvvf /usr/local nerdctl-full-2.2.0-linux-amd64.tar.gz
Check the containerd version
# containerd -v
containerd github.com/containerd/containerd/v2 v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
Check the nerdctl version
# nerdctl -v
nerdctl version 2.2.0
Check the runc version
# runc -v
runc version 1.3.3
commit: v1.3.3-0-gd842d77
spec: 1.2.1
go: go1.25.4
libseccomp: 2.6.0
Changing the sandbox image registry
Create the /etc/containerd configuration directory and generate the default config file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
Inspect the config file; the sandbox image is then switched to the Aliyun mirror
# cat /etc/containerd/config.toml | grep registry -A 2
sandbox = 'registry.k8s.io/pause:3.10.1'
[plugins.'io.containerd.cri.v1.images'.registry]
config_path = '/etc/containerd/certs.d:/etc/docker/certs.d'
sed -i "s|sandbox = 'registry.k8s.io/pause:3.10.1'|sandbox = 'registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10.1'|" /etc/containerd/config.toml
Configuring registry mirrors
Create a subdirectory for each registry
mkdir -p /etc/containerd/certs.d/{docker.io,quay.io,registry.k8s.io}
cat >/etc/containerd/certs.d/docker.io/hosts.toml <<EOF
server = "https://docker.io"
[host."https://docker.m.daocloud.io"]
capabilities = ["pull", "resolve"]
EOF
cat >/etc/containerd/certs.d/quay.io/hosts.toml <<EOF
server = "https://quay.io"
[host."https://quay.m.daocloud.io"]
capabilities = ["pull", "resolve"]
EOF
cat >/etc/containerd/certs.d/registry.k8s.io/hosts.toml <<EOF
server = "https://registry.k8s.io"
[host."https://k8s.m.daocloud.io"]
capabilities = ["pull", "resolve"]
EOF
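The three hosts.toml files differ only in registry and mirror, so they can also be generated from a small mapping. A sketch, writing into a temporary directory here so it is safe to run anywhere (the real target is /etc/containerd/certs.d):

```shell
# Generate one hosts.toml per upstream registry from an "upstream mirror" map.
# CERTS_DIR would normally be /etc/containerd/certs.d; a temp dir keeps the
# sketch harmless.
CERTS_DIR="$(mktemp -d)"

while read -r upstream mirror; do
  mkdir -p "$CERTS_DIR/$upstream"
  cat > "$CERTS_DIR/$upstream/hosts.toml" <<EOF
server = "https://$upstream"

[host."https://$mirror"]
  capabilities = ["pull", "resolve"]
EOF
done <<'MAP'
docker.io docker.m.daocloud.io
quay.io quay.m.daocloud.io
registry.k8s.io k8s.m.daocloud.io
MAP

ls "$CERTS_DIR"
```

Adding a registry later is then a one-line change to the map.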
With the mirrors configured, enable and start the containerd and buildkit services
systemctl daemon-reload
systemctl enable --now containerd
systemctl enable --now buildkit
Enable shell completion
nerdctl completion bash > /etc/bash_completion.d/nerdctl
source /etc/bash_completion.d/nerdctl
Pull an image to verify that the mirror works
nerdctl pull nginx
Install iptables (nerdctl's port publishing depends on it)
apt-get install -y iptables
Run a container to verify that containerd works properly
nerdctl run -d -p 8080:80 --name nginx docker.io/library/nginx:latest
Check the container status
# nerdctl ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
84b51f321552 docker.io/library/nginx:latest "/docker-entrypoint.…" 3 seconds ago Up 0.0.0.0:8080->80/tcp nginx
Test access to the nginx service
# curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Once verified, stop and remove the container
nerdctl stop nginx && nerdctl rm nginx
Deploying Kubernetes
Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
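The sed above comments out every fstab line that mentions swap. A dry run against a sample fstab (no system files touched, entries are illustrative) shows the effect:

```shell
# Dry-run the swap-commenting sed on a sample fstab.
SAMPLE="$(mktemp)"
cat > "$SAMPLE" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# "&" in the replacement stands for the whole matched line,
# so matching lines get a "#" prepended.
sed -ri 's/.*swap.*/#&/' "$SAMPLE"
cat "$SAMPLE"
```

Only the swap entry is commented; the root filesystem line is untouched.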
Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
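Once sysctl --system has run, the values can be read back from /proc/sys. A small check script (the bridge keys only exist after the br_netfilter module is loaded) might look like:

```shell
# Read back the settings applied by sysctl --system.
echo "net.ipv4.ip_forward = $(cat /proc/sys/net/ipv4/ip_forward)"

# The bridge keys appear only once br_netfilter is loaded.
for key in bridge-nf-call-iptables bridge-nf-call-ip6tables; do
  f="/proc/sys/net/bridge/$key"
  if [ -r "$f" ]; then
    echo "net.bridge.$key = $(cat "$f")"
  else
    echo "net.bridge.$key: not available (br_netfilter not loaded?)"
  fi
done
```

All three should report 1 on a correctly prepared node.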
Installing kubeadm
The Tsinghua University open-source mirror is used here for faster downloads; the Kubernetes version installed is 1.34.
Install the required packages
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates gpg
Fetch the GPG key; create the keyring directory first in case it does not exist
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.tuna.tsinghua.edu.cn/kubernetes/core%3A/stable%3A/v1.34/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Write the key location and the Tsinghua apt source into /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.tuna.tsinghua.edu.cn/kubernetes/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update the apt index
apt update -y
# Verify that the kubeadm, kubelet, and kubectl versions are 1.34
apt list | grep kubelet
kubelet/unknown 1.34.2-1.1 amd64
kubelet/unknown 1.34.2-1.1 arm64
kubelet/unknown 1.34.2-1.1 ppc64el
kubelet/unknown 1.34.2-1.1 s390x
Install kubelet, kubeadm, and kubectl
sudo apt-get install -y kubelet kubeadm kubectl
Pin the versions so that apt upgrade does not update them unexpectedly
sudo apt-mark hold kubelet kubeadm kubectl
Enable shell completion
kubectl completion bash > /etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
source /etc/bash_completion.d/kubectl
source /etc/bash_completion.d/kubeadm
Integrating with containerd
crictl is a command-line tool for managing CRI-compatible container runtimes; it acts as the intermediary through which Kubernetes tooling talks to the runtime (here, containerd).
- runtime-endpoint: tells crictl where the container runtime's service interface lives. unix:///run/containerd/containerd.sock means crictl connects to /run/containerd/containerd.sock over a Unix socket; in short, it is how crictl reaches containerd.
- image-endpoint: the same idea as runtime-endpoint, but for the image service. It points at the same socket here, since containerd provides both the runtime and image services.
cat > /etc/crictl.yaml <<-'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
Verify; if it runs without errors or WARN messages, the setup works
root@k8s-master01:~# crictl images
IMAGE TAG IMAGE ID SIZE
Enable shell completion
crictl completion bash > /etc/bash_completion.d/crictl
source /etc/bash_completion.d/crictl
Cluster initialization
The name field in kubeadm.yaml below must be resolvable on the network; alternatively, add a matching record to /etc/hosts on every machine in the cluster.
This initialization step runs on the master only.
Parameter notes:
advertiseAddress: the address the API server advertises to the cluster
imageRepository: the image registry
kubernetesVersion: the Kubernetes version, matching the packages installed above
Generate the default kubeadm config file, then edit it
kubeadm config print init-defaults > kubeadm.yaml
sed -i 's/.*advert.*/ advertiseAddress: 10.0.0.10/g' kubeadm.yaml
sed -i 's/.*name.*/ name: k8s-master01/g' kubeadm.yaml
sed -i 's|imageRepo.*|imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers|g' kubeadm.yaml
sed -i "/^\\s*networking:/a\\ podSubnet: 172.16.0.0/16" kubeadm.yaml
sed -i 's/kubernetesVersion:.*$/kubernetesVersion: v1.34.2/' kubeadm.yaml
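Several of these seds use loose patterns (any line containing advert or name), so it is worth checking their effect. A dry run against a minimal stand-in for the default config (field names as printed by kubeadm config print init-defaults, values hypothetical):

```shell
# Dry-run the kubeadm.yaml edits on a minimal sample of the default config.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
nodeRegistration:
  name: node
imageRepository: registry.k8s.io
kubernetesVersion: v1.34.0
networking:
  serviceSubnet: 10.96.0.0/12
EOF

sed -i 's/.*advert.*/  advertiseAddress: 10.0.0.10/g' "$CFG"
sed -i 's/.*name.*/  name: k8s-master01/g' "$CFG"
sed -i 's|imageRepo.*|imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers|g' "$CFG"
sed -i "/^\\s*networking:/a\\  podSubnet: 172.16.0.0/16" "$CFG"
sed -i 's/kubernetesVersion:.*$/kubernetesVersion: v1.34.2/' "$CFG"
cat "$CFG"
```

Review the full generated kubeadm.yaml afterwards in any case; on a real default config the loose patterns could touch more lines than intended.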
Pre-pull the images
# kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.34.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.34.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.34.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.34.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.12.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.6.5-0
Initialize the control plane
kubeadm init --config kubeadm.yaml
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c8bbc17f41d682e22e51b1d1ebeb316669e1eff98f8e6120a13de4cfb147a7e2
Grant admin access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploying the Calico network plugin
This Calico deployment step runs on the master only.
Below is the Calico manifest link on GitHub; if the machine cannot wget it directly, copy the file contents onto it.
https://github.com/projectcalico/calico/blob/release-v3.31/manifests/calico.yaml
Adjust the Calico Pod subnet IP pool
The CALICO_IPV4POOL_CIDR value must match the podSubnet value in kubeadm.yaml; in the stock manifest this variable may be commented out, in which case uncomment it before setting the value.
# line numbers vary by manifest version; locate them with: grep -n CALICO_IPV4POOL_CIDR calico.yaml
sed -n "7218,7219p" calico.yaml
- name: CALICO_IPV4POOL_CIDR
value: "172.16.0.0/16" # the podSubnet / --pod-network-cidr value
Apply calico.yaml
kubectl apply -f calico.yaml
root@k8s-master01:~# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5c676f698c-9285f 1/1 Running 0 8m56s
calico-node-hsqlv 1/1 Running 0 8m56s
coredns-7ddb67b59b-5cwcf 1/1 Running 0 65m
coredns-7ddb67b59b-t7xs6 1/1 Running 0 65m
etcd-k8s-master01 1/1 Running 0 65m
kube-apiserver-k8s-master01 1/1 Running 0 65m
kube-controller-manager-k8s-master01 1/1 Running 1 65m
kube-proxy-hclpd 1/1 Running 0 65m
kube-scheduler-k8s-master01 1/1 Running 0 65m
root@k8s-master01:~# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 65m v1.34.2
Joining worker nodes
Use the join command printed at the end of control-plane initialization; if it has been lost, generate a new token with kubeadm
kubeadm token create --print-join-command
root@k8s-master01:~# kubeadm token create --print-join-command
kubeadm join 10.0.0.10:6443 --token 6glhm9.vjdpiv2p7ug011gr --discovery-token-ca-cert-hash sha256:c8bbc17f41d682e22e51b1d1ebeb316669e1eff98f8e6120a13de4cfb147a7e2
Join from the worker node
root@k8s-worker01:~# kubeadm join 10.0.0.10:6443 --token 6glhm9.vjdpiv2p7ug011gr --discovery-token-ca-cert-hash sha256:c8bbc17f41d682e22e51b1d1ebeb316669e1eff98f8e6120a13de4cfb147a7e2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 511.262167ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Verification
Check the node status with kubectl
kubectl get nodes
Label k8s-worker01 with its role
kubectl label nodes k8s-worker01 node-role.kubernetes.io/worker=
root@k8s-master01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 101m v1.34.2
k8s-worker01 Ready worker 12m v1.34.2
Using kubectl on a worker node
After joining the cluster, a worker node cannot operate the cluster with kubectl directly; to enable it:
Copy /etc/kubernetes/admin.conf from the control node to the worker (note that this file carries full cluster-admin credentials)
scp /etc/kubernetes/admin.conf root@k8s-worker01:/etc/kubernetes/
Set the KUBECONFIG environment variable on the worker
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
The worker node can now access the cluster
root@k8s-worker01:~# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 108m v1.34.2
k8s-worker01 Ready worker 19m v1.34.2
Resetting the cluster
If you want to practice initializing an existing cluster again, or need to re-initialize for any reason (including a failed kubeadm init), reset the cluster as follows. Resetting deletes the cluster so it can be redeployed; as a rule, this command is only run on the k8s-master01 node.
root@k8s-master:~# kubeadm reset --cri-socket=unix:///run/containerd/containerd.sock
...
[reset] Are you sure you want to proceed? [y/N]: y
...
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
Following the hints, clean up the files and rules manually
root@k8s-master:~# rm -rf /etc/cni/net.d
root@k8s-master:~# iptables -F
root@k8s-master:~# rm -rf $HOME/.kube/config
After the cleanup, the cluster can be redeployed.