
Deploying Kubernetes 1.26.2 on CentOS 7.9

Contents

Reference tutorial: http://www.xbhp.cn/news/35670.html

The source looks like a personal site and I worry it will go offline someday, so I'm keeping this copy as a backup. And what if my site goes down too? Ha, that's what the local backup is for.

Hardware requirements
Optional steps
Configure the VM's IP address
Configure the Aliyun mirror repository
Initial configuration on each server
Set the master node's hostname
Set the worker node's hostname
Configure the hosts file on each node
Disable the firewall on each node
Disable SELinux
Permanently disable the swap partition on each node
Synchronize time across the nodes
Pass bridged IPv4 traffic to iptables chains (run on all three machines)
Install Docker on all nodes
Install the container runtime cri-dockerd on all nodes
Install the Kubernetes components on all nodes
Reboot the VMs (required)
Install and deploy the k8s control plane on the master node
List the domestic mirror images
Pull the k8s images on all master nodes
Initialize the master node (run on the master node only)
Join worker nodes to the cluster (run on worker nodes only)
Check the cluster node status
Install the cluster network (CNI); this article uses Calico
Troubleshooting
Managing the k8s cluster with Rancher
See the Gitee page for more content

Hardware requirements

1. Master host: 2-core CPU, 4 GB RAM, 20 GB disk
2. Node hosts: 4+ core CPU, 8+ GB RAM, 40+ GB disk
3. Full network connectivity between all machines in the cluster (public or private network both work)
4. Unique hostname, MAC address, and product_uuid on every node
5. Certain ports open on the machines
6. Swap disabled, which is required for the kubelet to work properly

Optional steps

Configure the VM's IP address

vi /etc/sysconfig/network-scripts/ifcfg-ens33

Our office gateway is 192.168.0.252; normally you would fill in whatever your router's admin page shows.

If you booted from the install disc, it is even faster to set the network directly in the installer's graphical interface.

ONBOOT=yes
IPADDR=192.168.0.202
NETMASK=255.255.255.0
GATEWAY=192.168.0.252
DNS1=8.8.8.8
DNS2=114.114.114.114

systemctl restart network
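
A quick sanity check after the restart, assuming the interface really is ens33 as above (my addition, not in the original):

ip addr show ens33 # the IPADDR configured above should be listed
ping -c 2 192.168.0.252 # the gateway should answer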

Configure the Aliyun mirror repository

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# clear the yum cache
yum clean all
# rebuild the cache from the Aliyun mirror
yum makecache
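
To confirm yum is actually using the mirror, an optional check (my addition):

yum repolist enabled # the repo list should now come from mirrors.aliyun.com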

Initial configuration on each server

Set the master node's hostname

hostnamectl set-hostname k8smaster && hostname # set the hostname of master node 1

Set the worker node's hostname

hostnamectl set-hostname k8snode01 && hostname # set the hostname of worker node 1

Configure the hosts file on each node

# adjust the IPs to match your own machines
vi /etc/hosts # edit the file; watch out for stray whitespace
192.168.0.202 k8smaster
192.168.0.203 k8snode01
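
With the hosts file in place on every node, name resolution is easy to verify (optional check, my addition):

ping -c 2 k8smaster # run from the worker node
ping -c 2 k8snode01 # run from the master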

Disable the firewall on each node

systemctl disable firewalld && systemctl stop firewalld
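
To confirm the firewall really is off (optional check, my addition):

systemctl is-active firewalld # should print "inactive"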

Disable SELinux

Check the SELinux status

sestatus

Disable temporarily

setenforce 0

Disable permanently

1. Open the /etc/selinux/config file (on CentOS 7, /etc/sysconfig/selinux is a symlink to it)
2. Change SELINUX=enforcing to SELINUX=disabled
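
If you would rather not edit the file by hand, a one-liner should do the same job (a sketch assuming the stock config at /etc/selinux/config):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config # should now print SELINUX=disabled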

Permanently disable the swap partition on each node

swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab # comment out the swap line
free -h # verify that swap now shows 0

Synchronize time across the nodes

Reference: "K8s study notes: time synchronization on a CentOS 7 cluster with Chrony"
The image I installed from already shipped with chrony, so I skipped this step; checking the time across the nodes showed no real drift.

yum install chrony -y
vim /etc/chrony.conf
# change the time server to Aliyun's ntp1.aliyun.com
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp1.aliyun.com iburst

Start the service

systemctl enable chronyd.service && systemctl start chronyd.service && systemctl status chronyd.service

Check on every node that the times agree

date 
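
chronyc can also confirm each node is really syncing against the Aliyun server (optional check, my addition):

chronyc sources -v # ntp1.aliyun.com should show a ^* marker once synced
chronyc tracking # shows the current clock offset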

Pass bridged IPv4 traffic to iptables chains (run on all three machines)

vim /etc/sysctl.d/k8s.conf
# write the following two lines

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# then apply the configuration
sysctl --system # takes effect
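
One caveat from my own runs, not in the original: the two bridge sysctls above only exist once the br_netfilter kernel module is loaded, so on a minimal install it is worth loading it explicitly and making that persist across reboots:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf # load on every boot
lsmod | grep br_netfilter # confirm the module is present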

Install Docker on all nodes

Docker's official install command:

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

If the Docker install complains that the public key for docker-buildx-plugin-0.10.2-1.el7.x86_64.rpm is not installed, import the CentOS 7 GPG key below and then re-run the install script above:

wget http://mirrors.aliyun.com/centos/7.9.2009/os/x86_64/RPM-GPG-KEY-CentOS-7
rpm --import RPM-GPG-KEY-CentOS-7

Configure Docker daemon options

vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://lvww6o3e.mirror.aliyuncs.com"]
}

Restart Docker

systemctl daemon-reload && systemctl start docker && systemctl enable docker

Check the Docker info

docker info
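
The line that matters most in that output is the cgroup driver, which should now match the systemd setting from daemon.json (quick check, my addition):

docker info | grep -i "cgroup driver" # expected: Cgroup Driver: systemd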

Install the container runtime cri-dockerd on all nodes

cri-dockerd releases on GitHub:
https://github.com/Mirantis/cri-dockerd/releases
Download the rpm and copy it to each server.

Install it via rpm:

rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm

Configure cri-dockerd to use a domestic mirror

vim /usr/lib/systemd/system/cri-docker.service

Modify the ExecStart line as follows:
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.9

Start the service

systemctl daemon-reload && systemctl start cri-docker && systemctl enable cri-docker

Check the service status

systemctl status cri-docker
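
A quick sanity check that the CRI socket kubeadm will talk to actually exists (my addition):

ls -l /var/run/cri-dockerd.sock # should show the socket file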

Install the Kubernetes components on all nodes

Configure the yum repository

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Install the kubeadm, kubelet, and kubectl components

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet --now
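
Note that the plain yum install pulls whatever version is newest in the repo. If you want every node pinned to the exact version deployed here (1.26.2), something like this should work instead (a hedged alternative, not from the original):

yum install -y kubelet-1.26.2 kubeadm-1.26.2 kubectl-1.26.2
systemctl enable kubelet --now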

Reboot the VMs (required)

After finishing the configuration above, reboot the VMs; otherwise containers such as kube-apiserver may fail to start.

Install and deploy the k8s control plane on the master node

List the domestic mirror images

 kubeadm config images list --image-repository registry.aliyuncs.com/google_containers

Pull the k8s images on all master nodes

The original article used 1.26.1; I was deploying 1.26.2, so the version below is changed to 1.26.2.
kubeadm config images pull --kubernetes-version=v1.26.2 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
# list the images
docker images 

Initialize the master node (run on the master node only)

The initialization command:
kubeadm init --apiserver-advertise-address=192.168.0.202 --kubernetes-version=v1.26.2 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --token-ttl=0 --cri-socket=unix:///var/run/cri-dockerd.sock --image-repository registry.aliyuncs.com/google_containers --upload-certs --ignore-preflight-errors=all
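
For reference, a quick gloss of the flags used above (my annotations):

# --apiserver-advertise-address  the master's own IP (192.168.0.202 here)
# --kubernetes-version           pin to the exact release being deployed
# --pod-network-cidr             pod subnet; must agree with the CNI's address pool
# --service-cidr                 virtual IP range for Services
# --token-ttl=0                  the bootstrap join token never expires
# --cri-socket                   point kubeadm at cri-dockerd instead of containerd
# --image-repository             pull control-plane images from the Aliyun mirror
# --upload-certs                 store control-plane certs so additional masters can join
# --ignore-preflight-errors=all  do not abort on preflight warnings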

Output after a successful init:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.010325 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
d6a23fbdbcfc00d73dbc9bbe6bf56b0aa5d7b0c4e01c45995b4d5fa6470064b2
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 04cm95.gzyodpm3xsflupzf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.202:6443 --token 04cm95.gzyodpm3xsflupzf \
    --discovery-token-ca-cert-hash sha256:0e963a559fb80237a80f884a65ad81a1f6768e6c38f0c646a434df361858d6e5 

Run the following commands as prompted:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join worker nodes to the cluster (run on worker nodes only)

Reboot the node first so the earlier configuration takes effect.

Copy the join command printed above and add the cri-socket flag; the command looks like this (delete the line-continuation backslash):

kubeadm join 192.168.0.202:6443 --token gqg5qi.ont18pcs9dq3wcez --discovery-token-ca-cert-hash sha256:7c39ec62c28330ef887a8fc3e5638cf36122e852876f9661b6b7d504899f5d46  --cri-socket=unix:///var/run/cri-dockerd.sock
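
If you ever lose the join command, the master can reprint it at any time (standard kubeadm behavior); note the cri-socket flag is not included and has to be appended by hand:

kubeadm token create --print-join-command
# then append: --cri-socket=unix:///var/run/cri-dockerd.sock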

Example output on success:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster node status

Run the following command on the master node:

kubectl get nodes
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES           AGE     VERSION
k8smaster   NotReady   control-plane   30m     v1.26.2
k8snode01   NotReady   <none>          5m40s   v1.26.2

Install the cluster network (CNI); this article uses Calico

Official reference: https://docs.tigera.io/calico/3.25/getting-started/kubernetes/self-managed-onprem/onpremises
Kubernetes versions supported by this Calico release:
https://docs.tigera.io/calico/3.25/getting-started/kubernetes/requirements

To install the Calico network, follow the official docs and run these two commands:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
kubectl apply -f calico.yaml
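
One thing worth double-checking before the apply (my note, not from the original): calico.yaml ships with CALICO_IPV4POOL_CIDR commented out, and the official docs say to uncomment it and set it to your pod CIDR whenever that differs from the default 192.168.0.0/16. Since this cluster was initialized with --pod-network-cidr=10.244.0.0/16, the edit would look like:

# in calico.yaml, in the calico-node container's env section:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"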

On success the output looks like this:

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

Problems you might run into installing Calico, and the fixes (I did not hit them)

An error here usually means the Calico version does not match the Kubernetes version; download a matching Calico release following the steps above.
You can also pre-download the following images:
calico/node v3.25.0
calico/pod2daemon-flexvol v3.25.0
calico/cni v3.25.0
calico/kube-controllers v3.25.0

Wait a minute or two, then check the cluster again

STATUS changes from NotReady to Ready

kubectl get nodes
NAME        STATUS   ROLES           AGE     VERSION
k8smaster   Ready    control-plane   30m     v1.26.2
k8snode01   Ready    <none>          6m21s   v1.26.2

Check the pods

kubectl get pods -A 
NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-57b57c56f-zmzpz   0/1     ContainerCreating   0          2m29s
kube-system   calico-node-kdxtb                         0/1     PodInitializing     0          2m29s
kube-system   calico-node-vvf22                         1/1     Running             0          2m29s
kube-system   coredns-5bbd96d687-c57bv                  0/1     ContainerCreating   0          31m
kube-system   coredns-5bbd96d687-dhjt4                  0/1     ContainerCreating   0          31m
kube-system   etcd-k8smaster                            1/1     Running             0          31m
kube-system   kube-apiserver-k8smaster                  1/1     Running             0          31m
kube-system   kube-controller-manager-k8smaster         1/1     Running             0          31m
kube-system   kube-proxy-588vm                          1/1     Running             0          7m29s
kube-system   kube-proxy-xntcv                          1/1     Running             0          31m
kube-system   kube-scheduler-k8smaster                  1/1     Running             0          31m

Wait another minute or two and they all turn Running:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57b57c56f-zmzpz   1/1     Running   0          3m57s
kube-system   calico-node-kdxtb                         1/1     Running   0          3m57s
kube-system   calico-node-vvf22                         1/1     Running   0          3m57s
kube-system   coredns-5bbd96d687-c57bv                  1/1     Running   0          33m
kube-system   coredns-5bbd96d687-dhjt4                  1/1     Running   0          33m
kube-system   etcd-k8smaster                            1/1     Running   0          33m
kube-system   kube-apiserver-k8smaster                  1/1     Running   0          33m
kube-system   kube-controller-manager-k8smaster         1/1     Running   0          33m
kube-system   kube-proxy-588vm                          1/1     Running   0          8m57s
kube-system   kube-proxy-xntcv                          1/1     Running   0          33m
kube-system   kube-scheduler-k8smaster                  1/1     Running   0          33m

At this point the nodes show Ready, which means Calico installed successfully.

The appendix below is from the original author; I did not need it, but I am recording it here.

Pitfalls at every step of the way... a little reward to myself for not giving up, hahaha

Troubleshooting
1. Before the Calico network was installed, coredns stayed stuck in the Pending state.

Check the coredns configuration:
kubectl edit cm coredns -n kube-system
List every pod in the cluster:
kubectl get pods -A
Check why a pod is Pending (the items in parentheses are placeholders):
kubectl describe/delete/logs pod (name) -n (namespace)
Check the cluster node status:
kubectl get nodes
Conclusion: coredns can only be scheduled onto a node once the network add-on has been installed successfully.
2. The Calico pods kept failing to pull their images, which left the network broken.
Fix: manually pull the Calico Docker images on every node:
calico/kube-controllers v3.25.0 5e785d005ccc 5 weeks ago 71.6MB
calico/cni v3.25.0 d70a5947d57e 5 weeks ago 198MB
calico/pod2daemon-flexvol v3.25.0 ed8b7bbb113f 5 weeks ago 14.6MB
calico/node v3.25.0 08616d26b8e7 5 weeks ago 245MB
Example:

docker pull calico/cni:v3.25.0
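
A small loop pulls all four images in one go (same tags as the list above):

for img in node cni pod2daemon-flexvol kube-controllers; do
  docker pull calico/$img:v3.25.0
done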

All pods are running normally

Node status becomes Ready

Managing the k8s cluster with Rancher
Packaging Docker images and containers

Import the newly built Kubernetes cluster into Rancher

Image pulls through the kubelet kept failing; switching the Docker registry mirrors fixed it:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://rrnv06ig.mirror.aliyuncs.com",
    "https://reg-mirror.qiniu.com",
    "https://docker.mirrors.ustc.edu.cn"
  ]
}
Once added, the cluster status in Rancher shows Active.
