
Kubernetes high-availability cluster setup (HA k8s control plane)


Contents
1. Basic system setup: 1.1 Disable the firewall; 1.2 Disable SELinux; 1.3 Disable swap; 1.4 Set the hostname; 1.5 Pass bridged IPv4 traffic to iptables chains; 1.6 Time synchronization; 1.7 Install iproute-tc
2. Deploy keepalived on all master nodes: 2.1 Install required packages and keepalived; 2.2 Configure the master nodes; 2.3 Start and check
3. Deploy haproxy: 3.1 Install; 3.2 Configure; 3.3 Start and check
4. Install Docker/kubeadm/kubelet on all nodes: 4.1 Install Docker; 4.2 Install cri-dockerd; 4.3 Add the Aliyun YUM repository; 4.4 Install kubeadm, kubelet, and kubectl; 4.5 Install the CNI plugin
5. Deploy the Kubernetes master: 5.1 Create the kubeadm configuration file; 5.2 Run on the master1 node
6. Install the cluster network
7. Join master2 to the cluster: 7.1 Copy certificates and related files; 7.2 Join master2; 7.3 Join master3; 7.4 Check the status
8. Join the Kubernetes worker nodes: 8.1 Run the join command on node1, node2 and node3; 8.2 Reinstall the cluster network after adding new nodes; 8.3 Check the status
9. Test the Kubernetes cluster

1. Basic system setup

1.1 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
1.2 Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary (current boot only)
1.3 Disable swap
swapoff -a                                  # temporary (current boot only)
sed -ri 's/.*swap.*/#&/' /etc/fstab         # permanent
1.4 Set the hostname
hostnamectl set-hostname zxhy-master

cat >> /etc/hosts << EOF
192.168.0.15 zxhy-vip
192.168.0.14 zxhy-master
192.168.0.222 zxhy-slave1
192.168.0.77 zxhy-slave2
192.168.0.188 zxhy-slave3
192.168.0.193 zxhy-slave4
192.168.0.227 zxhy-slave5
EOF
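Each machine needs its own hostname, and the /etc/hosts block should be appended on every node so they can resolve each other by name. A minimal, hedged example, assuming the name-to-server mapping in the table above:

hostnamectl set-hostname zxhy-slave1   # run on slave1; repeat with the matching name on each remaining node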
1.5 Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply
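These sysctls only exist once the br_netfilter kernel module is loaded, so a quick, hedged sanity check (standard commands, not part of the original steps):

modprobe br_netfilter                        # load the bridge netfilter module if it is not loaded yet
sysctl net.bridge.bridge-nf-call-iptables    # should print "= 1" after sysctl --system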
1.6 Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
1.7 Install iproute-tc
yum install iproute-tc -y
2. Deploy keepalived on all master nodes

2.1 Install required packages and keepalived
yum install -y conntrack-tools libseccomp libtool-ltdl
yum install -y keepalived
2.2 Configure the master nodes

Configuration on master1:

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
   router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.0.15
    }
    track_script {
        check_haproxy
    }
}
EOF

Configuration on master2 (BACKUP state, lower priority, same virtual IP):

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
   router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.0.15
    }
    track_script {
        check_haproxy
    }
}
EOF
2.3 Start and check

Run the following on all three master nodes:

# start keepalived
$ systemctl start keepalived.service
# enable it at boot
$ systemctl enable keepalived.service
# check its status
$ systemctl status keepalived.service

After starting, check the NIC on master1; the virtual IP should be bound to it:

ip a s eth0

If the cluster is built on cloud servers, remember to request a virtual IP address in the cloud provider's console, bind it to the three master nodes, and add the corresponding network/security-group rules; otherwise the virtual IP cannot be pinged.
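A quick, hedged way to confirm the VIP actually fails over (assumes the VIP 192.168.0.15 and interface eth0 used throughout this guide):

ping -c 3 192.168.0.15            # the VIP should answer while keepalived is running
systemctl stop keepalived         # on master1: simulate a failure
ip a s eth0                       # on master2: the VIP should now appear here
systemctl start keepalived        # restore master1 afterwards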

3. Deploy haproxy

3.1 Install
yum install -y haproxy
3.2 Configure

The configuration is identical on all master nodes. It declares the master apiservers as backends and sets haproxy to listen on port 16443, so port 16443 becomes the entry point of the cluster.

cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the -r option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the listen and backend sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      zxhy-nacos   192.168.0.14:6443 check
    server      zxhy-redis   192.168.0.77:6443 check
    server      zxhy-mysql   192.168.0.222:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
3.3 Start and check

Start haproxy on all three master nodes:

# enable at boot
$ systemctl enable haproxy
# start haproxy
$ systemctl start haproxy
# check its status
$ systemctl status haproxy
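A hedged quick check that haproxy is actually listening on the cluster entry port and that the stats page responds (the credentials and URI come from the configuration above):

ss -lntp | grep 16443                                                        # haproxy should be bound on *:16443
curl -su admin:awesomePassword "http://127.0.0.1:1080/admin?stats" | head    # stats page defined in the listen section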
4. Install Docker/kubeadm/kubelet on all nodes

This setup uses Docker as the container runtime, so install Docker first. Because Kubernetes 1.24 removed the built-in dockershim, cri-dockerd is installed in the next step to connect Docker to the kubelet through the CRI.

4.1 Install Docker
$ wget  -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-24.0.5.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 24.0.5, build e68fc7a
$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [ ]
}
EOF

4.2 Install cri-dockerd

Download the cri-dockerd package:

cd /opt
wget  

Install the service:

yum install -y cri-dockerd-0.3.6.20231018204925.877dc6a4-0.el7.x86_64.rpm
vim /usr/lib/systemd/system/cri-docker.service
# add the pause image from the mirror to the ExecStart line:
# --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
systemctl daemon-reload
vim /usr/lib/systemd/system/cri-docker.socket

Enable and start the service, then check its status:

# enable at boot
$ systemctl enable cri-docker
# start cri-docker
$ systemctl start cri-docker
# check its status
$ systemctl status cri-docker
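As a hedged sanity check (not in the original steps), confirm the CRI socket exists; crictl only becomes available after the kubeadm packages in section 4.4 pull in cri-tools, so run the second command after that point:

ls -l /var/run/cri-dockerd.sock
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version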

Check whether the CRI plugin is disabled in containerd:

vi /etc/containerd/config.toml
# if disabled_plugins contains "cri", remove it:
# disabled_plugins = ["cri"]
disabled_plugins = []

Restart the container runtime:

systemctl restart containerd 
4.3 Add the Aliyun YUM repository
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=
EOF

4.4 Install kubeadm, kubelet, and kubectl

Because Kubernetes versions change frequently, pin the version number when installing:

$ yum install -y kubelet-1.24.7 kubeadm-1.24.7 kubectl-1.24.7
$ systemctl enable kubelet
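A hedged confirmation that the pinned versions landed as expected (plain version queries, nothing cluster-specific):

kubeadm version -o short
kubelet --version
kubectl version --client --short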
4.5 Install the CNI plugin (kubeadm-cni)

Until a CNI plugin is installed, the nodes stay NotReady and the kubelet reports the condition below; this is expected and is resolved when the flannel network is installed in section 6.

network plugin is not ready: cni config uninitialized

5. Deploy the Kubernetes master

5.1 Create the kubeadm configuration file

Operate on the master that currently holds the VIP, here master1:

$ mkdir /usr/local/kubernetes/manifests -p
$ cd /usr/local/kubernetes/manifests/
$ vi kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: 192.168.0.14
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: zxhy-nacos
  taints: null
---
apiServer:
  certSANs:
    - zxhy-nacos
    - zxhy-redis
    - zxhy-mysql
    - zxhy-vip
    - 192.168.0.14
    - 192.168.0.222
    - 192.168.0.77
    - 192.168.0.15
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: zxhy-vip:16443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.24.7
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
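Optionally, a hedged pre-flight step (not part of the original walkthrough): pre-pull the control-plane images with the config above before running init, so network problems surface early and the init step itself is faster:

kubeadm config images pull --config kubeadm-config.yaml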
5.2 Run on the master1 node
$ kubeadm init --config kubeadm-config.yaml
$ export KUBECONFIG=/etc/kubernetes/admin.conf

Following the instructions printed by kubeadm init, configure the environment so the kubectl tool can be used:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
$ kubectl get pods -n kube-system

Also save the join command printed by kubeadm init; it will be needed shortly:

kubeadm join zxhy-vip:16443 --token gp4qgj.3x8wal0o2gmbcpis --discovery-token-ca-cert-hash sha256:af5fe3bb4f2ada51967c34053e94ed4c703287e3e26487d6d8dbe450a2550013 --cri-socket unix:///var/run/cri-dockerd.sock
# if you forgot to copy it, regenerate the join command with:
kubeadm token create --print-join-command
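If only the CA cert hash is missing, it can be recomputed on master1 with the standard openssl pipeline (hedged: shown here for convenience, not part of the original output):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'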
6. Install the cluster network

Fetch the flannel YAML from the official repository and run the following on master1:

mkdir /usr/local/kubernetes/manifests/flannel
cd /usr/local/kubernetes/manifests/flannel
wget -c  

Install the flannel network:

kubectl apply -f kube-flannel.yml 

Check:

kubectl get pods -n kube-system
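Depending on the flannel version, its pods may land in the kube-flannel namespace rather than kube-system, so a hedged broader check is:

kubectl get pods -n kube-flannel
kubectl get nodes -o wide   # nodes should turn Ready once the flannel pods are running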
7. Join master2 to the cluster

7.1 Copy certificates and related files

Copy the certificates and related files from master1 to master2:

# ssh root@192.168.0.222 "mkdir -p /etc/kubernetes/pki/etcd"
# scp /etc/kubernetes/admin.conf root@192.168.0.222:/etc/kubernetes
# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.0.222:/etc/kubernetes/pki
# scp /etc/kubernetes/pki/etcd/ca.* root@192.168.0.222:/etc/kubernetes/pki/etcd
7.2 Join master2 to the cluster

Run the join command printed by kubeadm init on master1, adding the --control-plane flag so this node joins as a control-plane (master) node:

kubeadm join zxhy-vip:16443 --token gp4qgj.3x8wal0o2gmbcpis --discovery-token-ca-cert-hash sha256:af5fe3bb4f2ada51967c34053e94ed4c703287e3e26487d6d8dbe450a2550013 --control-plane --cri-socket unix:///var/run/cri-dockerd.sock

Then, following the prompts, configure the environment so kubectl can be used on master2 as well:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
$ kubectl get pods -n kube-system
7.3 Join master3 to the cluster

Repeat the same steps as for master2, as sketched below.
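A hedged sketch of those steps for master3 (assuming 192.168.0.77 is master3's address, per the haproxy backend list; adjust if your layout differs):

ssh root@192.168.0.77 "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/admin.conf root@192.168.0.77:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.0.77:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.0.77:/etc/kubernetes/pki/etcd
kubeadm join zxhy-vip:16443 --token gp4qgj.3x8wal0o2gmbcpis --discovery-token-ca-cert-hash sha256:af5fe3bb4f2ada51967c34053e94ed4c703287e3e26487d6d8dbe450a2550013 --control-plane --cri-socket unix:///var/run/cri-dockerd.sock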

7.4 Check the status
kubectl get node
kubectl get pods --all-namespaces
8. Join the Kubernetes worker nodes

8.1 Run the join command on node1, node2 and node3

To add worker nodes to the cluster, run the kubeadm join command printed by kubeadm init (without --control-plane):

kubeadm join zxhy-vip:16443 --token gp4qgj.3x8wal0o2gmbcpis --discovery-token-ca-cert-hash sha256:af5fe3bb4f2ada51967c34053e94ed4c703287e3e26487d6d8dbe450a2550013 --cri-socket unix:///var/run/cri-dockerd.sock
8.2 Reinstall the cluster network after adding the new worker nodes

After all nodes have joined, reinstall the flannel network:

# go to the flannel manifests directory
cd /usr/local/kubernetes/manifests/flannel
# delete the previously installed network
kubectl delete -f kube-flannel.yml
# re-apply the network
kubectl apply -f kube-flannel.yml
8.3 Check the status
kubectl get node
kubectl get pods --all-namespaces
9. Test the Kubernetes cluster

Create a pod in the Kubernetes cluster and expose it to verify everything works:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
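A hedged final check: look up the NodePort that was assigned and curl nginx through any node IP (192.168.0.14 is used here only as an example address from this guide):

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.0.14:${NODE_PORT}   # should return the nginx welcome page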
