Upgrading the Kubernetes Version of a kubeadm Multi-Master Cluster

Version notes

This upgrade moves the cluster from v1.15.3 to v1.16.3. Note that newer Kubernetes releases, particularly v1.18, require a kernel of 4.4 or later.
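A quick way to verify the kernel before starting (a minimal sketch; the 4.4 threshold comes from the note above, and version comparison via coreutils sort -V is assumed):

```shell
# Compare the running kernel against the 4.4 minimum using sort -V
required="4.4"
current="$(uname -r | cut -d- -f1)"
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
    echo "kernel $current OK (>= $required)"
else
    echo "kernel $current is below $required; upgrade it first" >&2
fi
```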

Upgrade

Master node upgrade

View the current cluster nodes

[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01.sy.com Ready master 3d9h v1.15.3
master02.sy.com Ready master 3d9h v1.15.3
master03.sy.com Ready master 3d9h v1.15.3
[root@master01 ~]# kubeadm config images list
I0521 20:18:24.336912 23537 version.go:248] remote version is much newer: v1.18.3; falling back to: stable-1.15
W0521 20:18:34.337440 23537 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.15.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.15.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0521 20:18:34.337495 23537 version.go:99] falling back to the local client version: v1.15.3
k8s.gcr.io/kube-apiserver:v1.15.3
k8s.gcr.io/kube-controller-manager:v1.15.3
k8s.gcr.io/kube-scheduler:v1.15.3
k8s.gcr.io/kube-proxy:v1.15.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@master01 ~]#

Upgrade the kubeadm tool

[root@master01 ~]# yum update -y kubeadm-1.16.3-0

List the component images for the target version

[root@master01 ~]# kubeadm config images list
W0521 20:33:34.091388 32299 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0521 20:33:34.091559 32299 version.go:102] falling back to the local client version: v1.16.3
k8s.gcr.io/kube-apiserver:v1.16.3
k8s.gcr.io/kube-controller-manager:v1.16.3
k8s.gcr.io/kube-scheduler:v1.16.3
k8s.gcr.io/kube-proxy:v1.16.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
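Before pulling anything, kubeadm can also preview the upgrade path; a hedged pre-check (guarded so the snippet is harmless on a host without kubeadm, and it changes nothing on the cluster):

```shell
# Preview the available upgrade without applying anything
if command -v kubeadm >/dev/null 2>&1; then
    kubeadm upgrade plan
else
    echo "kubeadm not installed on this host" >&2
fi
```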

Create an image pull script and retag the images

[root@master01 ~]# cat pull-image.sh 
## Mirror registry to pull from
MY_REGISTRY=registry.aliyuncs.com/google_containers

## Pull the images
docker pull ${MY_REGISTRY}/kube-apiserver:v1.16.3
docker pull ${MY_REGISTRY}/kube-controller-manager:v1.16.3
docker pull ${MY_REGISTRY}/kube-scheduler:v1.16.3
docker pull ${MY_REGISTRY}/kube-proxy:v1.16.3
docker pull ${MY_REGISTRY}/etcd:3.3.15-0
docker pull ${MY_REGISTRY}/pause:3.1
docker pull ${MY_REGISTRY}/coredns:1.6.2
## Retag to the k8s.gcr.io names kubeadm expects
docker tag ${MY_REGISTRY}/kube-apiserver:v1.16.3 k8s.gcr.io/kube-apiserver:v1.16.3
docker tag ${MY_REGISTRY}/kube-scheduler:v1.16.3 k8s.gcr.io/kube-scheduler:v1.16.3
docker tag ${MY_REGISTRY}/kube-controller-manager:v1.16.3 k8s.gcr.io/kube-controller-manager:v1.16.3
docker tag ${MY_REGISTRY}/kube-proxy:v1.16.3 k8s.gcr.io/kube-proxy:v1.16.3
docker tag ${MY_REGISTRY}/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
docker tag ${MY_REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
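After running the script, a quick loop can confirm every retagged image is actually present locally (a sketch; it assumes the docker CLI and mirrors the image list from the script above):

```shell
# Report any image from the list that failed to pull or retag
for img in kube-apiserver:v1.16.3 kube-controller-manager:v1.16.3 \
           kube-scheduler:v1.16.3 kube-proxy:v1.16.3 \
           etcd:3.3.15-0 pause:3.1 coredns:1.6.2; do
    if ! docker image inspect "k8s.gcr.io/${img}" >/dev/null 2>&1; then
        echo "missing: k8s.gcr.io/${img}"
    fi
done
```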

Apply the control-plane upgrade

[root@master01 ~]# kubectl drain master01.sy.com --ignore-daemonsets
[root@master01 ~]# kubeadm upgrade apply v1.16.3
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.3"
[upgrade/versions] Cluster version: v1.15.3
[upgrade/versions] kubeadm version: v1.16.3
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
.........
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.3". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Upgrade kubelet

[root@master01 ~]# yum update -y kubelet-1.16.3-0
[root@master01 ~]# systemctl daemon-reload && systemctl restart kubelet

Checking the logs shows kubelet reporting an error:

cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d

Fix

In v1.16, kubelet validates the CNI config file, which must now declare a cniVersion. In the flannel ConfigMap, add a new line above the "cbr0" line:

  cni-conf.json: |
    {
      "cniVersion": "0.2.0",
      "name": "cbr0",

Also change the flannel image to quay.io/coreos/flannel:v0.11.0-amd64.
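One way to roll both changes out (a sketch; the names kube-flannel-cfg, kube-flannel-ds-amd64, kube-flannel, and the app=flannel label match the stock flannel v0.11 manifest, so verify them in your cluster first; RUN=echo keeps this a dry run until you clear it):

```shell
# RUN=echo prints the commands; set RUN= (empty) to actually execute.
RUN="${RUN:-echo}"
# Add "cniVersion": "0.2.0" to cni-conf.json in the ConfigMap
$RUN kubectl -n kube-system edit configmap kube-flannel-cfg
# Switch the DaemonSet to the v0.11.0 image
$RUN kubectl -n kube-system set image daemonset/kube-flannel-ds-amd64 \
    kube-flannel=quay.io/coreos/flannel:v0.11.0-amd64
# Recreate the pods so /etc/cni/net.d is re-rendered on every node
$RUN kubectl -n kube-system delete pod -l app=flannel
```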

Check the node status again

[root@master01 ~]# kubectl uncordon master01.sy.com
node/master01.sy.com uncordoned
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01.sy.com Ready master 4d3h v1.16.3
master02.sy.com Ready master 4d3h v1.15.3
master03.sy.com Ready master 4d2h v1.15.3

The other master nodes are upgraded with the same steps, only the upgrade command is replaced by the one shown below (on later kubeadm releases the equivalent command is simply "kubeadm upgrade node"):

[root@master02 ~]# kubeadm upgrade node experimental-control-plane

[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01.sy.com Ready master 4d3h v1.16.3
master02.sy.com Ready master 4d3h v1.16.3
master03.sy.com Ready master 4d3h v1.16.3

Worker node upgrade

kubectl drain [node-name] --ignore-daemonsets
yum update -y kubeadm-1.16.3-0
yum update -y kubelet-1.16.3-0
kubeadm upgrade node
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon [node-name]
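The worker steps can be strung into a rolling loop, one node at a time (a sketch under assumptions: the node names are hypothetical, root SSH access to each worker is assumed, and RUN=echo keeps it a dry run until you clear it):

```shell
# Upgrade workers one by one: drain, update packages, re-sync kubelet
# config, restart kubelet, then put the node back in service.
RUN="${RUN:-echo}"                    # set RUN= (empty) to actually execute
NODES="node01.sy.com node02.sy.com"   # hypothetical worker names
for n in $NODES; do
    $RUN kubectl drain "$n" --ignore-daemonsets
    $RUN ssh "root@$n" "yum update -y kubeadm-1.16.3-0 kubelet-1.16.3-0 \
        && kubeadm upgrade node \
        && systemctl daemon-reload && systemctl restart kubelet"
    $RUN kubectl uncordon "$n"
done
```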
