1. Deployment Environment
- Host list:
Hostname | CentOS Version | IP | Docker Version | Host Spec | Calico Version | Keepalived Version | Kubernetes Version | Notes |
---|---|---|---|---|---|---|---|---|
k8s-master-01 | CentOS Linux release 7.6.1810 (Core) | 172.17.252.87 | 19.03.13 | 4C-8G | v3.14.2 | v1.3.5 | k8s-v1.18.8 | control-plane |
k8s-master-02 | CentOS Linux release 7.6.1810 (Core) | 172.17.252.128 | 19.03.13 | 4C-8G | v3.14.2 | v1.3.5 | k8s-v1.18.8 | control-plane |
k8s-master-03 | CentOS Linux release 7.6.1810 (Core) | 172.17.252.144 | 19.03.13 | 4C-8G | v3.14.2 | v1.3.5 | k8s-v1.18.8 | control-plane |
k8s-node-01 | CentOS Linux release 7.6.1810 (Core) | 172.17.252.132 | 19.03.13 | 4C-8G | v3.14.2 | / | k8s-v1.18.8 | worker-node |
k8s-node-02 | CentOS Linux release 7.6.1810 (Core) | 172.17.252.74 | 19.03.13 | 4C-8G | v3.14.2 | / | k8s-v1.18.8 | worker-node |
k8s-node-03 | CentOS Linux release 7.6.1810 (Core) | 172.17.252.229 | 19.03.13 | 4C-8G | v3.14.2 | / | k8s-v1.18.8 | worker-node |
- Component details:
Component | Image | Notes |
---|---|---|
calico | calico/node:v3.14.2 calico/pod2daemon-flexvol:v3.14.2 calico/cni:v3.14.2 calico/kube-controllers:v3.14.2 | network plugin (CNI) |
coredns | registry.aliyuncs.com/google_containers/coredns:1.6.7 | in-cluster DNS |
etcd | registry.aliyuncs.com/google_containers/etcd:3.4.3-0 | cluster data store |
kube-apiserver | registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.8 | API entry point |
kube-controller | registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.8 | controller manager |
kube-proxy | registry.aliyuncs.com/google_containers/kube-proxy:v1.18.8 | Service-to-Pod routing and forwarding |
kube-scheduler | registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.8 | scheduler |
metrics-server | mirrorgooglecontainers/metrics-server-amd64:v0.3.6 | cluster metrics (live monitoring) |
kubernetes-dashboard | kubernetesui/dashboard:v2.0.0 | dashboard, exposed via NodePort |
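kubeadm pulls the core control-plane images itself during init (see section 5), but the add-on images in the table can be pre-pulled on every node to speed up bootstrap. A minimal sketch, assuming Docker is already installed (section 3) and the tags match the versions you actually deploy:
#!/bin/bash
# pre-pull the add-on images from the component table
for img in \
  calico/node:v3.14.2 \
  calico/pod2daemon-flexvol:v3.14.2 \
  calico/cni:v3.14.2 \
  calico/kube-controllers:v3.14.2 \
  mirrorgooglecontainers/metrics-server-amd64:v0.3.6 \
  kubernetesui/dashboard:v2.0.0; do
  docker pull "$img"
done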
2. High-Availability Architecture
This deployment builds a highly available Kubernetes cluster with kubeadm. HA for a Kubernetes cluster really means HA for its core components, i.e. HA of the master (control-plane) nodes: each master runs kube-apiserver, kube-controller-manager, kube-scheduler, and etcd; keepalived floats a VIP across the masters, and every node reaches the apiserver through VIP:6443.
3. Installation Preparation
1. Run on every host (substitute each host's own name):
hostnamectl set-hostname k8s-master-01
2. Update the hosts file (see the one-pass distribution sketch after this step):
cat <<EOF >> /etc/hosts
172.17.252.87 k8s-master-01
172.17.252.128 k8s-master-02
172.17.252.144 k8s-master-03
172.17.252.132 k8s-node-01
172.17.252.74 k8s-node-02
172.17.252.229 k8s-node-03
EOF
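If passwordless SSH for root is already in place, the hostname and hosts-file steps can be pushed to every node in one pass. A sketch using the names and IPs from the host table (the SSH access is an assumption; adjust to your environment):
#!/bin/bash
# distribute /etc/hosts and set each node's hostname
declare -A nodes=(
  [172.17.252.87]=k8s-master-01
  [172.17.252.128]=k8s-master-02
  [172.17.252.144]=k8s-master-03
  [172.17.252.132]=k8s-node-01
  [172.17.252.74]=k8s-node-02
  [172.17.252.229]=k8s-node-03
)
for ip in "${!nodes[@]}"; do
  scp /etc/hosts root@"$ip":/etc/hosts
  ssh root@"$ip" "hostnamectl set-hostname ${nodes[$ip]}"
done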
3. Batch-run the preparation script
#!/bin/bash
# install Kubernetes with kubeadm
# *** the hostname must be set on every host first ***
# install the required base environment
# cat <<EOF >> /etc/hosts
# 10.100.10.10 k8s-master-01
# 10.100.10.11 k8s-master-02
# 10.100.10.12 k8s-master-03
# 10.100.10.20 k8s-node-01
# EOF
# add the root SSH public key
if [ ! -d '/root/.ssh' ];then
mkdir /root/.ssh
touch /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
fi
if [ `grep -c 'AAAAB3NzaC1yc2EAAAADAQABAAABAQDVO9Z' /root/.ssh/authorized_keys` -eq '0' ];then
echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVO9Z8S0Bnal6FgZ4O4zFEvFSIPpnR00GiRmGG2fXetwClpW1QE8s2+cKGKjmYBWsLI9DfkcvlKISb030d18g/uNnbnRrFgDJml202XRFgXKIzlgq9XBAiT9TBJ/qKGbzuO2wDyufsayBlqR+yK17C+YoX9OrcxsQSWXTHrYqaXSwmiVT+Ui4w7a4KJ+sRjDF6JfEJdT8ODGTI5L9h0p+gEqN1UQSxgsj4+0+P704ln/Nw950xCigeHnNcp2COmsUv7qYgaWeNaV0Uv6qJkct8AqTJtm1LujlKAcK9jVGJuQurtuAW5pArp3IhxwJVhsu88jjBVPwh0g8Uyiw7Bth5 root@k8s-master-01' >> /root/.ssh/authorized_keys
fi
# set -e
K8S_VERSION=1.18.15
# disable swap
sed -i 's/^[^#].*swap*/#&/g' /etc/fstab
swapoff -a
# disable firewalld and SELinux
systemctl disable firewalld && systemctl stop firewalld
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
# flush iptables rules
iptables -F
# install nfs-utils
yum install -y nfs-utils
# install and enable the NTP service
yum install -y ntp
systemctl start ntpd && systemctl enable ntpd
# set the system time zone, then restart rsyslog/crond so they pick it up
timedatectl set-timezone Asia/Shanghai  # assumption: adjust to your region
systemctl restart rsyslog
systemctl restart crond
# install Docker
# yum remove -y docker*
yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.12 docker-ce-cli-19.03.12 containerd.io
systemctl start docker
# if /data is a mount point, relocate Docker's data directory onto it
if mountpoint /data; then
mkdir -pv /data/docker
# quote EOF so that $MAINPID below is written literally instead of being expanded by this script
cat << 'EOF' > /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -g /data/docker -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
fi
# set kernel parameters for Kubernetes
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# load br_netfilter and apply the parameters
modprobe br_netfilter
sysctl --system
sysctl -p /etc/sysctl.d/k8s.conf
# prerequisites for running kube-proxy in IPVS mode
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# The /etc/sysconfig/modules/ipvs.modules file created above ensures the required modules are reloaded automatically after a reboot. Verify with: lsmod | grep -e ip_vs -e nf_conntrack_ipv4
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum install ipvsadm ipset -y
# configure the Docker daemon
cat <<EOF > /etc/docker/daemon.json
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"https://docker.mirrors.ustc.edu.cn",
"http://hub-mirror.c.163.com"
],
"insecure-registries": [
"harbor.miduchina.com",
"harbor-nh.miduchina.com"
],
"bip": "192.168.50.1/24",
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver":"json-file",
"log-opts": {"max-size":"500m", "max-file":"2"}
}
EOF
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
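# optional sanity check: confirm Docker picked up the systemd cgroup driver
# and (if /data was a mount point) the relocated data root
docker info 2>/dev/null | grep -iE 'cgroup driver|docker root'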
# add the Alibaba Cloud Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# install kubelet, kubeadm, and kubectl
yum install -y kubelet-${K8S_VERSION} kubeadm-${K8S_VERSION} kubectl-${K8S_VERSION} && systemctl enable kubelet
echo "KUBELET_EXTRA_ARGS=--allowed-unsafe-sysctls 'net.*'" > /etc/sysconfig/kubelet
4. Install and Configure keepalived
1. Install keepalived
#!/bin/bash
yum install -y ipvsadm popt popt-devel libnl libnl-devel libnl3-devel libnfnetlink libnfnetlink-devel net-snmp-devel openssl openssl-devel
mkdir -pv /opt/keepalived
mkdir -pv /etc/keepalived
wget -P /opt/keepalived https://www.keepalived.org/software/keepalived-2.1.5.tar.gz
tar -zxf /opt/keepalived/keepalived-2.1.5.tar.gz -C /opt/keepalived/
cd /opt/keepalived/keepalived-2.1.5 && ./configure --with-init=systemd --with-systemdsystemunitdir=/usr/lib/systemd/system --prefix=/usr/local/keepalived --with-run-dir=/usr/local/keepalived/run
make && make install
ln -s /usr/local/keepalived/sbin/keepalived /usr/bin/keepalived
keepalived --version
systemctl enable keepalived
mkdir -pv /usr/local/keepalived/run
2. Configure keepalived
Edit /etc/keepalived/keepalived.conf per host (router_id, state, and priority differ between masters):
! Configuration File for keepalived
global_defs {
    router_id k8s-master-03    # machine identifier, usually the hostname (not required to be); used in failure notifications
}
vrrp_instance VI_1 {           # VRRP instance definition
    state BACKUP               # MASTER or BACKUP, must be uppercase
    interface bond0            # interface that carries the VIP
    virtual_router_id 2        # virtual router ID; every node in the same VRRP instance must share this unique value
    priority 80                # higher number wins; the MASTER's priority must exceed every BACKUP's
    advert_int 1               # VRRP advertisement interval between MASTER and BACKUP, in seconds
    authentication {           # authentication type and password
        auth_type PASS         # PASS or AH
        auth_pass 1111         # must be identical on MASTER and BACKUP within the same vrrp_instance
    }
    virtual_ipaddress {        # virtual IP address(es); one per line, multiple allowed
        10.100.10.2
    }
}
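The configuration above only fails over when a node (or keepalived itself) goes down. A common hardening step, not part of the original setup, is a vrrp_script that tracks the local kube-apiserver so the VIP also moves when the apiserver process fails; a sketch (script path and thresholds are illustrative):
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"   # exits non-zero when the apiserver is unhealthy
    interval 3    # run every 3 seconds
    weight -30    # subtract 30 from priority while failing
    fall 3        # 3 consecutive failures => mark down
    rise 2        # 2 consecutive successes => mark up
}
The referenced check script can be as small as:
#!/bin/bash
# fail when the local apiserver stops answering /healthz
curl -sfk --max-time 2 https://127.0.0.1:6443/healthz -o /dev/null
Reference it from the instance with a track_script { check_apiserver } block, and (on keepalived 2.x) validate the final file with keepalived -t -f /etc/keepalived/keepalived.conf before starting the service.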
5. Initialize master01
1. Bring up the VIP on master01
# The VIP is brought up only so that kubeadm init on master01 can reach the control-plane endpoint; remove it once initialization completes (keepalived manages it afterwards). Use your actual VIP and interface:
ifconfig eth0:2 172.27.34.100 netmask 255.255.255.0 up
2. Prepare init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 0s # token TTL; 0s means it never expires
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.252.225 # change to this node's own IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master-01 # this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
certificateKey: f31fa8563cf08a3c6f304192552f837c92d50c3667453fef94cad1e9473a6912
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "172.17.252.225:6443" # VIP:PORT
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
    port: "10252"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      heartbeat-interval: "6000"
      election-timeout: "30000"
imageRepository: registry.aliyuncs.com/google_containers # use a China-local mirror registry
kind: ClusterConfiguration
kubernetesVersion: v1.18.15 # Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.255.128.0/17
  podSubnet: 10.255.0.0/17 # pod subnet; must match the calico configuration
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
    port: "10251"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
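Optionally, the same config file can be used to pre-pull the control-plane images before running init, which surfaces registry problems early:
kubeadm config images pull --config=init-config.yaml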
3. Run the initialization command
# With --upload-certs, kubeadm init uploads the control-plane certificates to a Secret in the cluster. Note that this Secret expires automatically after 2 hours
kubeadm init --config=init-config.yaml --upload-certs
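On success, kubeadm prints follow-up instructions; the usual kubectl setup on master01 looks like:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config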
A new certificate key can be generated (and uploaded) with:
kubeadm init phase upload-certs --upload-certs
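The phase command prints a fresh certificate key to pass via --certificate-key. If the bootstrap token has also expired, a complete new join command can be printed with:
kubeadm token create --print-join-command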
4. Join master02 and master03 to the cluster
kubeadm join 172.17.252.225:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:a2618fc28ad359e1ada6e0b5b86d378b9c90d6712c0286e7fa827c19a08a3531 \
--control-plane --certificate-key 5b7e4f23d40fbaee4e6021bb6a27f3b28017093bd969c6184d8cd3e06d56726d
Note:
With --upload-certs there is no need to sync the pki files manually; the script below is the manual alternative.
scp_control_plane.sh
#!/bin/bash
ssh_args="StrictHostKeyChecking no"
private_key="/opt/sshkeys/k8s"
USER=root # adjust as needed
CONTROL_PLANE_IPS="10.100.10.11 10.100.10.12"
for host in ${CONTROL_PLANE_IPS}; do
echo "*** ${host} ***"
ssh -o "${ssh_args}" -i ${private_key} "${USER}"@"${host}" "mkdir -p /etc/kubernetes/pki/etcd"
scp -o "${ssh_args}" -i ${private_key} /etc/kubernetes/pki/ca.crt "${USER}"@"${host}":/etc/kubernetes/pki/ca.crt
scp -o "${ssh_args}" -i ${private_key} /etc/kubernetes/pki/ca.key "${USER}"@"${host}":/etc/kubernetes/pki/ca.key
scp -o "${ssh_args}" -i ${private_key} /etc/kubernetes/pki/sa.key "${USER}"@"${host}":/etc/kubernetes/pki/sa.key
scp -o "${ssh_args}" -i ${private_key} /etc/kubernetes/pki/sa.pub "${USER}"@"${host}":/etc/kubernetes/pki/sa.pub
scp -o "${ssh_args}" -i ${private_key} /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@"${host}":/etc/kubernetes/pki/front-proxy-ca.crt
scp -o "${ssh_args}" -i ${private_key} /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@"${host}":/etc/kubernetes/pki/front-proxy-ca.key
scp -o "${ssh_args}" -i ${private_key} /etc/kubernetes/pki/etcd/ca.crt "${USER}"@"${host}":/etc/kubernetes/pki/etcd/ca.crt
scp -o "${ssh_args}" -i ${private_key} /etc/kubernetes/pki/etcd/ca.key "${USER}"@"${host}":/etc/kubernetes/pki/etcd/ca.key
done
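Once master02 and master03 have joined, verify the control plane from any master:
kubectl get nodes -o wide               # all three masters should be listed
kubectl get pods -n kube-system -o wide # etcd/apiserver/controller/scheduler pods per master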
6. Start keepalived
# take the temporary VIP down on master01 (the alias, interface, and address must match what was brought up in step 5.1)
ifconfig ens160:2 172.27.34.222 netmask 255.255.255.0 down
systemctl start keepalived && systemctl enable keepalived
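With keepalived running on all masters, the VIP should answer on whichever node currently holds it. A quick check, assuming the VIP from init-config.yaml (172.17.252.225):
ip addr show | grep 172.17.252.225                 # shows which node/interface holds the VIP
curl -k https://172.17.252.225:6443/healthz; echo  # expect: ok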