A Detailed Guide to Highly Available K8S Deployment (Super Detailed!)
I. Preface
II. Basic environment deployment
1) Preliminary preparation (all nodes)
2) Install the Docker container runtime (all nodes)
3) Configure the k8s yum repository (all nodes)
4) Point sandbox_image at the Alibaba Cloud google_containers mirror (all nodes)
5) Set the containerd cgroup driver to systemd (all nodes)
6) Start installing kubeadm, kubelet and kubectl (master node)
7) Initialize the cluster with kubeadm (master node)
8) Install a Pod network plugin (CNI: Container Network Interface) (master)
9) Join the node machines to the k8s cluster
10) Configure IPVS
11) Cluster high-availability configuration
12) Deploy an Nginx + Keepalived high-availability load balancer
III. Deploying the k8s management platform dashboard
1) Deploy the dashboard
2) Create a login user
3) Configure hosts and log in to the dashboard web UI
IV. Deploying the k8s image registry Harbor
1) Install helm
2) Configure hosts
3) Create an SSL certificate
4) Install ingress
5) Install NFS
6) Create the NFS provisioner and the persistent-storage StorageClass (SC)
7) Deploy Harbor (HTTPS)
I. Preface
Official site: https://kubernetes.io/
Official docs: https://kubernetes.io/zh-cn/docs/home/
II. Basic environment deployment
1) Preliminary preparation (all nodes)
1. Modify the hostnames and configure hosts
Deploy 1 master and 2 node machines first; another master node will be added later.
# Run on 192.168.0.113
hostnamectl set-hostname k8s-master-168-0-113
# Run on 192.168.0.114
hostnamectl set-hostname k8s-node1-168-0-114
# Run on 192.168.0.115
hostnamectl set-hostname k8s-node2-168-0-115
Configure hosts
cat >> /etc/hosts <<EOF
192.168.0.113 k8s-master-168-0-113
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
EOF
2. Set up SSH mutual trust
# Just keep pressing Enter at the prompts
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master-168-0-113
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1-168-0-114
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2-168-0-115
3. Time synchronization
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
4. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
5. Turn off swap
# Turn swap off temporarily; swap is disabled mainly for performance reasons
swapoff -a
# Verify that swap is off
free
# Disable it permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
6. Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
7. Let iptables see bridged traffic (optional, all nodes)
To load the module explicitly, run sudo modprobe br_netfilter, and verify that it is loaded with lsmod | grep br_netfilter:
sudo modprobe br_netfilter
lsmod | grep br_netfilter
For iptables on the Linux nodes to correctly see bridged traffic, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration. For example:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Set the required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
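A quick way to confirm these settings took effect (a minimal verification sketch; it assumes only the two files written above):
# All three values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'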
2) Install the Docker container runtime (all nodes)
Note: Kubernetes versions before v1.24 included a direct integration with Docker Engine, using a component named dockershim. That special direct integration is no longer part of Kubernetes (the removal was announced as part of the v1.20 release). You can read "Check whether Dockershim removal affects you" to understand how this removal may affect you, and see "Migrating from dockershim" to learn how to migrate.
# Configure the yum repositories
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-Linux-* bak/
# CentOS 7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# CentOS 8
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
# Install the yum-config-manager tool
yum -y install yum-utils
# Add the docker yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce
yum install -y docker-ce
# Start docker
systemctl start docker
# Enable at boot
systemctl enable docker
# Check the version number
docker --version
# Check detailed version information
docker version
# Docker registry mirror configuration
# Edit /etc/docker/daemon.json; create it if it does not exist
# After adding the content below, reload the docker service:
cat >/etc/docker/daemon.json<<EOF
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Reload
systemctl reload docker
# Check status
systemctl status docker containerd
[Tip] dockerd still calls the containerd API under the hood; containerd is the intermediate component between dockerd and runC. So starting the docker service also starts the containerd service.
3) Configure the k8s yum repository (all nodes)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
4) Point sandbox_image at the Alibaba Cloud google_containers mirror (all nodes)
# Export the default configuration; config.toml does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
5) Set the containerd cgroup driver to systemd (all nodes)
Since v1.24.0, Kubernetes no longer uses dockershim and takes containerd as the container runtime endpoint instead. containerd is therefore required; it was installed automatically along with docker above, and docker here only acts as a client. The container engine is still containerd.
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# Restart containerd to apply the change
systemctl restart containerd
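To double-check that the change landed in the running configuration, the merged config can be dumped (a small verification sketch, not required by the tutorial):
# Should print: SystemdCgroup = true
containerd config dump | grep SystemdCgroup
systemctl is-active containerd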
6) Start installing kubeadm, kubelet and kubectl (master node)
# Without a version, the latest is installed; at the time of writing the latest is 1.24.1
yum install -y kubelet-1.24.1 kubeadm-1.24.1 kubectl-1.24.1 --disableexcludes=kubernetes
# disableexcludes=kubernetes: disable every repository other than kubernetes
# Enable at boot and start right now; --now: start the service immediately
systemctl enable --now kubelet
# Check the status; wait a while before checking, startup is a bit slow
systemctl status kubelet
Checking the logs shows an error like the following:
kubelet.service: Main process exited, code=exited, status=1/FAILURE kubelet.service: Failed with result 'exit-code'.
[Explanation] After a fresh (or first) install of k8s, kubelet keeps restarting until kubeadm init or kubeadm join has been run. This is normal; the problem resolves itself once init or join is executed, and the official docs describe this, so there is no need to troubleshoot kubelet.service at this point.
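If you still want to see what kubelet is complaining about while it crash-loops, its journal is enough (a minimal sketch using standard systemd tooling):
# The last few kubelet log lines; until init/join runs it typically complains
# that /var/lib/kubelet/config.yaml is missing
journalctl -u kubelet --no-pager | tail -n 20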
Check the versions
kubectl version
yum info kubeadm
7) Initialize the cluster with kubeadm (master node)
It is best to pull the images in advance so the installation is faster.
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.24.1
docker pull registry.aliyuncs.com/google_containers/pause:3.7
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.3-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6
Cluster initialization
kubeadm init \
  --apiserver-advertise-address=192.168.0.113 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=cluster-endpoint \
  --kubernetes-version v1.24.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5
# --image-repository: where to pull images from (available since 1.13). The default is k8s.gcr.io; here it is set to the domestic mirror registry.aliyuncs.com/google_containers.
# --kubernetes-version: the Kubernetes version. The default is stable-1, which downloads the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a known version skips that network request.
# --apiserver-advertise-address: which interface on the master is used to talk to the other cluster nodes. If the master has several interfaces it is best to set this explicitly, otherwise kubeadm picks the interface with the default gateway. This is the master node IP; remember to replace it.
# --pod-network-cidr: the Pod network range. Kubernetes supports several network plugins and each has its own requirement for --pod-network-cidr; 10.244.0.0/16 is used here because flannel will be used and it expects this CIDR.
# --control-plane-endpoint: cluster-endpoint is a custom DNS name mapped to this IP; add the hosts entry "192.168.0.113 cluster-endpoint". This lets you pass --control-plane-endpoint=cluster-endpoint to kubeadm init and the same DNS name to kubeadm join, and later repoint cluster-endpoint to the load balancer address in the high-availability setup.
[Tip] kubeadm does not support converting a single control-plane cluster created without --control-plane-endpoint into a highly available cluster.
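The same flags can also be expressed as a kubeadm configuration file, which is easier to keep under version control. A sketch of the equivalent (the file name kubeadm-config.yaml is just an example; the values mirror the command above):
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.113
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.1
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: cluster-endpoint:6443
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml --v=5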
Reset and re-initialize
kubeadm reset
rm -fr ~/.kube/ /etc/kubernetes/* /var/lib/etcd/*
kubeadm init \
  --apiserver-advertise-address=192.168.0.113 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=cluster-endpoint \
  --kubernetes-version v1.24.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5
# (see the explanation of each flag above)
Configure environment variables
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Temporary (lost when the current session is closed)
export KUBECONFIG=/etc/kubernetes/admin.conf
# Permanent (recommended)
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
The node still reports a problem; checking the logs in /var/log/messages shows:
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
The next step is to install a Pod network plugin.
8) Install a Pod network plugin (CNI: Container Network Interface) (master)
You must deploy a Container Network Interface (CNI) based Pod network plugin so that your Pods can communicate with each other.
# It is best to pull the image in advance (all nodes)
docker pull quay.io/coreos/flannel:v0.14.0
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
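Once the manifest is applied, a quick check that flannel is running and the node has gone Ready (a minimal verification sketch):
kubectl get pods -A -o wide | grep -i flannel
kubectl get nodes -o wide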
If the installation above fails, download my offline copy from the Baidu netdisk and install it offline.
Link: https://pan.m.zonelele.com/s/1HB9xuO3bssAW7v5HzpXkeQ
Extraction code: 8888
Check the node again; it is now healthy.
9) Join the node machines to the k8s cluster
Install kubelet first
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable at boot and start right now; --now: start the service immediately
systemctl enable --now kubelet
systemctl status kubelet
If you do not have a token, you can get one by running the following command on the control-plane node:
kubeadm token list
By default, tokens expire after 24 hours. To join a node after the current token has expired, create a new token by running the following command on the control-plane node:
kubeadm token create
# Check again
kubeadm token list
If you do not have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
If the join command printed by kubeadm init was not saved, it can be regenerated with the following command (recommended); there is normally no need to fetch the token and ca-cert-hash separately as above, one command does it all:
kubeadm token create --print-join-command
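The printed command has the following shape; the token and hash below are placeholders, so use the values printed on your own control-plane node:
# Run on each worker node (placeholder values)
kubeadm join cluster-endpoint:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>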
Wait a while before checking the node status again, because kube-proxy and flannel still need to be installed on the new nodes.
kubectl get pods -A
kubectl get nodes
10) Configure IPVS
[Problem] ClusterIPs (or Service names) cannot be pinged from inside the cluster.
1. Load the ip_vs kernel modules
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
ËùÓнڵãÑéÖ¤¿ªÆôÁË ipvs£º
lsmod |grep ip_vs
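modprobe only loads the modules for the current boot. To make them persist across reboots, a systemd modules-load drop-in can be used (a sketch, assuming the module names above; nf_conntrack is added because IPVS depends on it on recent kernels):
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_sh
ip_vs_rr
ip_vs_wrr
nf_conntrack
EOF
systemctl restart systemd-modules-load.service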
2. Install the ipvsadm tool
yum install ipset ipvsadm -y
3. Edit the kube-proxy configmap and change mode to ipvs
kubectl edit configmap -n kube-system kube-proxy
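In the editor, find mode: "" under config.conf and change it to mode: "ipvs". The same change can also be made non-interactively (a sketch; it assumes the mode field still holds the default empty string):
kubectl -n kube-system get configmap kube-proxy -o yaml \
  | sed 's/mode: ""/mode: "ipvs"/' \
  | kubectl apply -f -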
4. Restart kube-proxy
# Check first
kubectl get pod -n kube-system | grep kube-proxy
# Then delete the pods so they are recreated
kubectl get pod -n kube-system | grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
# Check again
kubectl get pod -n kube-system | grep kube-proxy
5. Check the ipvs forwarding rules
ipvsadm -Ln
11) Cluster high-availability configuration
There are two approaches to building a highly available (HA) Kubernetes cluster:
Stacked control-plane nodes, where the etcd members are co-located with the control-plane nodes (the approach used in this chapter).
External etcd nodes, where etcd runs on nodes separate from the control plane.
Here a new machine, 192.168.0.116, is added as a second master node. Configure it exactly like the master node above, except for the final initialization step.
1. Modify the hostnames and configure hosts
Apply the following on all nodes:
# Run on 192.168.0.113
hostnamectl set-hostname k8s-master-168-0-113
# Run on 192.168.0.114
hostnamectl set-hostname k8s-node1-168-0-114
# Run on 192.168.0.115
hostnamectl set-hostname k8s-node2-168-0-115
# Run on 192.168.0.116
hostnamectl set-hostname k8s-master2-168-0-116
Configure hosts
cat >> /etc/hosts <<EOF
192.168.0.113 k8s-master-168-0-113 cluster-endpoint
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
192.168.0.116 k8s-master2-168-0-116
EOF
2. Set up SSH mutual trust
# Just keep pressing Enter at the prompts
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master-168-0-113
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1-168-0-114
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2-168-0-115
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master2-168-0-116
3. Time synchronization
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
4. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
5. Turn off swap
# Turn swap off temporarily; swap is disabled mainly for performance reasons
swapoff -a
# Verify that swap is off
free
# Disable it permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
6. Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
7. Let iptables see bridged traffic (optional, all nodes)
To load the module explicitly, run sudo modprobe br_netfilter, and verify that it is loaded with lsmod | grep br_netfilter:
sudo modprobe br_netfilter
lsmod | grep br_netfilter
For iptables on the Linux nodes to correctly see bridged traffic, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration. For example:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Set the required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
8. Install the Docker container runtime (all nodes)
Note: Kubernetes versions before v1.24 included a direct integration with Docker Engine, using a component named dockershim. That special direct integration is no longer part of Kubernetes (the removal was announced as part of the v1.20 release). See "Check whether Dockershim removal affects you" and "Migrating from dockershim" for details.
# Configure the yum repositories
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-Linux-* bak/
# CentOS 7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# CentOS 8
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
# Install the yum-config-manager tool
yum -y install yum-utils
# Add the docker yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce
yum install -y docker-ce
# Start docker
systemctl start docker
# Enable at boot
systemctl enable docker
# Check the version number
docker --version
# Check detailed version information
docker version
# Docker registry mirror configuration
# Edit /etc/docker/daemon.json; create it if it does not exist
# After adding the content below, reload the docker service:
cat >/etc/docker/daemon.json<<EOF
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Reload
systemctl reload docker
# Check status
systemctl status docker containerd
[Tip] dockerd still calls the containerd API under the hood; containerd is the intermediate component between dockerd and runC. So starting the docker service also starts the containerd service.
9. Configure the k8s yum repository (all nodes)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
10. Point sandbox_image at the Alibaba Cloud google_containers mirror (all nodes)
# Export the default configuration; config.toml does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
11. Set the containerd cgroup driver to systemd
Since v1.24.0, Kubernetes no longer uses dockershim and takes containerd as the container runtime endpoint instead. containerd is therefore required; it was installed automatically along with docker above, and docker here only acts as a client. The container engine is still containerd.
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# Restart containerd to apply the change
systemctl restart containerd
12. Install kubeadm, kubelet and kubectl (master node)
# Without a version, the latest is installed; at the time of writing the latest is 1.24.1
yum install -y kubelet-1.24.1 kubeadm-1.24.1 kubectl-1.24.1 --disableexcludes=kubernetes
# disableexcludes=kubernetes: disable every repository other than kubernetes
# Enable at boot and start right now; --now: start the service immediately
systemctl enable --now kubelet
# Check the status; wait a while before checking, startup is a bit slow
systemctl status kubelet
# Check the versions
kubectl version
yum info kubeadm
13. Join the k8s cluster
# If the certificates have expired, the following command generates and uploads new ones; it prints the certificate key, which is needed below
kubeadm init phase upload-certs --upload-certs
# You can also pass a custom --certificate-key at init time and reuse it with join later. To generate such a key, use the following command (not executed here; the command above is sufficient):
kubeadm certs certificate-key
kubeadm token create --print-join-command
kubeadm join cluster-endpoint:6443 --token wswrfw.fc81au4yvy6ovmhh \
    --discovery-token-ca-cert-hash sha256:43a3924c25104d4393462105639f6a02b8ce284728775ef9f9c30eed8e0abc0f \
    --control-plane --certificate-key 8d2709697403b74e35d05a420bd2c19fd8c11914eb45f2ff22937b245bed5b68
# --control-plane tells kubeadm join to create a new control plane; this flag is required when joining as a master
# --certificate-key ... downloads the control-plane certificates from the kubeadm-certs Secret in the cluster and decrypts them with the given key. The value is the key printed by "kubeadm init phase upload-certs --upload-certs" above.
Then run the following commands as prompted:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check
kubectl get nodes
kubectl get pods -A -owide
Although there are now two masters, the outside world still needs a single entry point, so a load balancer is required; if one master goes down, traffic automatically switches to the other master node.
12) Deploy an Nginx + Keepalived high-availability load balancer
1. Install Nginx and Keepalived
# Run on both master nodes
yum install nginx keepalived -y
2. Nginx configuration
Configure on both master nodes:
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
# Layer-4 load balancing for the apiserver components on the two masters
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        # Master APISERVER IP:PORT
        server 192.168.0.113:6443;
        # Master2 APISERVER IP:PORT
        server 192.168.0.116:6443;
    }
    server {
        listen 16443;
        proxy_pass k8s-apiserver;
    }
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server {
        listen 80 default_server;
        server_name _;
        location / {
        }
    }
}
EOF
[Tip] If you only need high availability and do not want apiserver load balancing, nginx can be skipped, but it is still better to configure kube-apiserver load balancing.
3. Keepalived configuration (master)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51   # VRRP route ID, unique per instance
    priority 100           # Priority; use 90 on the backup server
    advert_int 1           # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.0.120/24
    }
    track_script {
        check_nginx
    }
}
EOF
vrrp_script: the script that checks the state of nginx (failover is decided based on nginx's state)
virtual_ipaddress: the virtual IP (VIP)
nginx health-check script:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
4. Keepalived configuration (backup)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51   # VRRP route ID, unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.120/24
    }
    track_script {
        check_nginx
    }
}
EOF
nginx health-check script:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
5. Start the services and enable them at boot
systemctl daemon-reload
systemctl restart nginx && systemctl enable nginx && systemctl status nginx
systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived
Check the VIP
ip a
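A quick way to exercise the failover path without touching Kubernetes is to stop nginx on the node that currently holds the VIP and watch the address move (a sketch; run the stop on the active master and the check on the backup):
# On the master currently holding 192.168.0.120
systemctl stop nginx
# On the backup: the VIP should appear here within a few seconds
ip a | grep 192.168.0.120
# Restore the service afterwards
systemctl start nginx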
6. Modify hosts (all nodes)
Point cluster-endpoint at the VIP instead of the IP it was mapped to earlier; the final hosts content is shown below (a scripted variant follows the listing).
192.168.0.113 k8s-master-168-0-113
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
192.168.0.116 k8s-master2-168-0-116
192.168.0.120 cluster-endpoint
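On nodes that already carry the old mapping, the edit can be scripted instead of done by hand (a sketch, assuming the hosts entries were added exactly as shown earlier):
# Drop the old cluster-endpoint alias from the master line, then map it to the VIP
sed -i 's/^192.168.0.113 k8s-master-168-0-113 cluster-endpoint$/192.168.0.113 k8s-master-168-0-113/' /etc/hosts
grep -q 'cluster-endpoint' /etc/hosts || echo "192.168.0.120 cluster-endpoint" >> /etc/hosts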
7. Test and verify
Check the version (load-balancing verification)
curl -k https://cluster-endpoint:16443/version
High-availability verification: shut down the k8s-master-168-0-113 node
shutdown -h now
curl -k https://cluster-endpoint:16443/version
kubectl get nodes
kubectl get pods -A
[Tip] A stacked cluster carries the risk of coupled failures: if one node fails, both an etcd member and a control-plane instance are lost, and redundancy is reduced. You can mitigate this risk by adding more control-plane nodes.
III. Deploying the k8s management platform dashboard
1) Deploy the dashboard
GitHub: https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
kubectl get pods -n kubernetes-dashboard
However, this is only reachable from inside the cluster. For external access, either deploy an ingress or change the Service to the NodePort type. Here the Service is used to expose a port.
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
The modified content is as follows:
# Copyright 2017 The Kubernetes Authors.## Licensed under the Apache License, Version 2.0 (the "License");# you may not use this file except in compliance with the License.# You may obtain a copy of the License at## http://www.apache.org/licenses/LICENSE-2.0## Unless required by applicable law or agreed to in writing, software# distributed under the License is distributed on an "AS IS" BASIS,# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.# See the License for the specific language governing permissions and# limitations under the License.apiVersion: v1kind: Namespacemetadata: name: kubernetes-dashboard---apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard---kind: ServiceapiVersion: v1metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboardspec: type: NodePort ports: - port: 443 targetPort: 8443 nodePort: 31443 selector: k8s-app: kubernetes-dashboard---apiVersion: v1kind: Secretmetadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-certs namespace: kubernetes-dashboardtype: Opaque---apiVersion: v1kind: Secretmetadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-csrf namespace: kubernetes-dashboardtype: Opaquedata: csrf: ""---apiVersion: v1kind: Secretmetadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-key-holder namespace: kubernetes-dashboardtype: Opaque---kind: ConfigMapapiVersion: v1metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-settings namespace: kubernetes-dashboard---kind: RoleapiVersion: rbac.authorization.k8s.io/v1metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboardrules: # Allow Dashboard to get, update and delete Dashboard exclusive secrets. - apiGroups: [""] resources: ["secrets"] resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"] verbs: ["get", "update", "delete"] # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. - apiGroups: [""] resources: ["configmaps"] resourceNames: ["kubernetes-dashboard-settings"] verbs: ["get", "update"] # Allow Dashboard to get metrics. 
- apiGroups: [""] resources: ["services"] resourceNames: ["heapster", "dashboard-metrics-scraper"] verbs: ["proxy"] - apiGroups: [""] resources: ["services/proxy"] resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"] verbs: ["get"]---kind: ClusterRoleapiVersion: rbac.authorization.k8s.io/v1metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboardrules: # Allow Metrics Scraper to get metrics from the Metrics server - apiGroups: ["metrics.k8s.io"] resources: ["pods", "nodes"] verbs: ["get", "list", "watch"]---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboardroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubernetes-dashboardsubjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubernetes-dashboardroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: kubernetes-dashboardsubjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard---kind: DeploymentapiVersion: apps/v1metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboardspec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard spec: securityContext: seccompProfile: type: RuntimeDefault containers: - name: kubernetes-dashboard image: kubernetesui/dashboard:v2.6.0 imagePullPolicy: Always ports: - containerPort: 8443 protocol: TCP args: - --auto-generate-certificates - --namespace=kubernetes-dashboard # Uncomment the following line to manually specify Kubernetes API server Host # If not specified, Dashboard will attempt to auto discover the API server and connect # to it. Uncomment only if the default does not work. 
# - --apiserver-host=http://my-address:port volumeMounts: - name: kubernetes-dashboard-certs mountPath: /certs # Create on-disk volume to store exec logs - mountPath: /tmp name: tmp-volume livenessProbe: httpGet: scheme: HTTPS path: / port: 8443 initialDelaySeconds: 30 timeoutSeconds: 30 securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 2001 volumes: - name: kubernetes-dashboard-certs secret: secretName: kubernetes-dashboard-certs - name: tmp-volume emptyDir: {} serviceAccountName: kubernetes-dashboard nodeSelector: "kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule---kind: ServiceapiVersion: v1metadata: labels: k8s-app: dashboard-metrics-scraper name: dashboard-metrics-scraper namespace: kubernetes-dashboardspec: ports: - port: 8000 targetPort: 8000 selector: k8s-app: dashboard-metrics-scraper---kind: DeploymentapiVersion: apps/v1metadata: labels: k8s-app: dashboard-metrics-scraper name: dashboard-metrics-scraper namespace: kubernetes-dashboardspec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: dashboard-metrics-scraper template: metadata: labels: k8s-app: dashboard-metrics-scraper spec: securityContext: seccompProfile: type: RuntimeDefault containers: - name: dashboard-metrics-scraper image: kubernetesui/metrics-scraper:v1.0.8 ports: - containerPort: 8000 protocol: TCP livenessProbe: httpGet: scheme: HTTP path: / port: 8000 initialDelaySeconds: 30 timeoutSeconds: 30 volumeMounts: - mountPath: /tmp name: tmp-volume securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 2001 serviceAccountName: kubernetes-dashboard nodeSelector: "kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule volumes: - name: tmp-volume emptyDir: {}
ÖØа²ÅÅ
kubectl delete -f recommended.yamlkubectl apply -f recommended.yamlkubectl get svc,pods -n kubernetes-dashboard
µÇ¼ºó¸´ÖÆ
2) Create a login user
cat > ServiceAccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f ServiceAccount.yaml
Create and fetch a login token
kubectl -n kubernetes-dashboard create token admin-user
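By default the token issued by kubectl create token is short-lived. For lab use, a longer validity can be requested with the --duration flag (a sketch; pick a duration appropriate for your environment):
# A token valid for 24 hours instead of the default lifetime
kubectl -n kubernetes-dashboard create token admin-user --duration=24h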
3) Configure hosts and log in to the dashboard web UI
192.168.0.120 cluster-endpoint
Log in at https://cluster-endpoint:31443
Sign in with the token created above.
IV. Deploying the k8s image registry Harbor
GitHub: https://github.com/helm/helm/releases
Harbor is installed with helm here, so helm has to be installed first.
1) Install helm
mkdir -p /opt/k8s/helm && cd /opt/k8s/helm
wget https://get.helm.sh/helm-v3.9.0-rc.1-linux-amd64.tar.gz
tar -xf helm-v3.9.0-rc.1-linux-amd64.tar.gz
ln -s /opt/k8s/helm/linux-amd64/helm /usr/bin/helm
helm version
helm help
2) Configure hosts
192.168.0.120 myharbor.com
3) Create an SSL certificate
mkdir /opt/k8s/helm/stl && cd /opt/k8s/helm/stl
# Generate the CA private key
openssl genrsa -out ca.key 4096
# Generate the CA certificate
openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=harbor/OU=harbor/CN=myharbor.com" \
  -key ca.key -out ca.crt
# Create the domain certificate: generate the private key
openssl genrsa -out myharbor.com.key 4096
# Generate the certificate signing request (CSR)
openssl req -sha512 -new \
  -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=harbor/OU=harbor/CN=myharbor.com" \
  -key myharbor.com.key -out myharbor.com.csr
# Generate the x509 v3 extension file
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=myharbor.com
DNS.2=*.myharbor.com
DNS.3=hostname
EOF
# Create the Harbor access certificate
openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in myharbor.com.csr -out myharbor.com.crt
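To confirm the signed certificate actually carries the SAN entries from v3.ext, a quick openssl check is enough (a small verification sketch):
openssl x509 -in myharbor.com.crt -noout -text | grep -A1 'Subject Alternative Name'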
4) Install ingress
ingress website: https://kubernetes.github.io/ingress-nginx/
ingress repository: https://github.com/kubernetes/ingress-nginx
Deployment docs: https://kubernetes.github.io/ingress-nginx/deploy/
1. Deploy via helm
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
2. Install via YAML manifest (the method used in this chapter)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
If pulling the images fails, the image addresses can be changed as follows before installing.
# The images can be pulled in advance before installing
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
# Replace the image addresses
sed -i 's@k8s.gcr.io/ingress-nginx/controller:v1.2.0(.*)@registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0@' deploy.yaml
sed -i 's@k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1(.*)$@registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1@' deploy.yaml
### A few more manual edits are needed:
# 1. Change kind: Deployment to DaemonSet and comment out replicas:, because DaemonSet mode runs one pod on every node
# 2. Add hostNetwork: true
# 3. Change LoadBalancer to NodePort
# 4. Add - --watch-ingress-without-class=true below --validating-webhook-key
# 5. Make the master nodes schedulable
kubectl taint nodes k8s-master-168-0-113 node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint nodes k8s-master2-168-0-116 node-role.kubernetes.io/control-plane:NoSchedule-
kubectl apply -f deploy.yaml
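After applying the manifest, a quick sanity check that the controller pods run on every node (DaemonSet mode) and that the Service was switched to NodePort (a minimal verification sketch):
kubectl get pods -n ingress-nginx -o wide
kubectl get svc -n ingress-nginx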
5) Install NFS
1. Install NFS on all nodes
yum -y install nfs-utils rpcbind
2. Create the shared directory on the master node and grant permissions
mkdir /opt/nfsdata
# Grant permissions on the shared directory
chmod 666 /opt/nfsdata
3. Configure the exports file
cat > /etc/exports<<EOF
/opt/nfsdata *(rw,no_root_squash,no_all_squash,sync)
EOF
# Apply the configuration
exportfs -r
The exportfs command
Common options:
-a  Export (or unexport) all directories
-r  Re-export all directories
-u  Unexport a directory
-v  Show the shared directories (the options above are used on the server side)
4. Start rpc and nfs (clients only need the rpc service) (mind the order)
systemctl start rpcbind
systemctl start nfs-server
systemctl enable rpcbind
systemctl enable nfs-server
Check
showmount -e
# Via the VIP
showmount -e 192.168.0.120
-e  Show the NFS server's export list
-a  Show the NFS resources mounted on this machine
-v  Show the version number
5. Client
# Install
yum -y install nfs-utils rpcbind
# Start the rpc service
systemctl start rpcbind
systemctl enable rpcbind
# Create the mount directory
mkdir /mnt/nfsdata
# Mount
echo "192.168.0.120:/opt/nfsdata /mnt/nfsdata nfs defaults 0 1">> /etc/fstab
mount -a
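A quick check that the mount works end to end (a sketch; the test file name is arbitrary):
df -hT /mnt/nfsdata
# Write on the client, then confirm the file shows up under /opt/nfsdata on the server
touch /mnt/nfsdata/nfs-write-test && ls -l /mnt/nfsdata/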
6. rsync data synchronization
[1] Install rsync
# Install on both ends
yum -y install rsync
[2] Configuration
Add the following to /etc/rsyncd.conf:
cat >/etc/rsyncd.conf<<EOF
uid = root
gid = root
# Confine the daemon to the source directory (chroot)
use chroot = yes
# Listen address
address = 192.168.0.113
# Listen port tcp/udp 873; can be checked with cat /etc/services | grep rsync
port 873
# Log file location
log file = /var/log/rsyncd.log
# Location of the file holding the process ID
pid file = /var/run/rsyncd.pid
# Client addresses allowed to connect
hosts allow = 192.168.0.0/16
# Shared module name
[nfsdata]
# Real path of the source directory
path = /opt/nfsdata
comment = Document Root of www.kgc.com
# Whether clients may upload files; default is true for all modules
read only = yes
# File types that are not compressed again during sync
dont compress = *.gz *.bz2 *.tgz *.zip *.rar *.z
# Authorized accounts, separated by spaces; anonymous if omitted, not tied to system accounts
auth users = backuper
# Data file holding the account information
secrets file = /etc/rsyncd_users.db
EOF
Configure rsyncd_users.db
cat >/etc/rsyncd_users.db<<EOF
backuper:123456
EOF
# Officially required: permissions on this file must be at most 600!
chmod 600 /etc/rsyncd_users.db
[3] Common rsyncd.conf parameters
rsyncd.conf parameter | Description |
---|---|
uid=root | The user rsync runs as. |
gid=root | The group rsync runs as (the user's group) |
use chroot=no | If true, the daemon chroots to the path before transferring files for the client. A security setting; since most setups are on an internal network it is fine not to set it |
max connections=200 | Maximum number of connections; default 0 means unlimited, a negative value disables the module |
timeout=400 | Default 0 means no timeout; 300-600 (5-10 minutes) is recommended |
pid file | After starting, the rsync daemon writes its process pid to this file. If the file already exists, rsync does not overwrite it and terminates instead |
lock file | The lock file used to support the "max connections" parameter so the total number of connections does not exceed the limit |
log file | If unset or misconfigured, rsync outputs its log messages via rsyslog |
ignore errors | Ignore I/O errors |
read only=false | Whether clients may upload files; default is true for all modules |
list=false | Whether clients may list the available modules; allowed by default |
hosts allow | Hostnames, IP addresses or address ranges allowed to connect; absent by default, i.e. everyone may connect |
hosts deny | Hostnames, IP addresses or address ranges not allowed to connect; absent by default, i.e. everyone may connect |
auth users | Users (space- or comma-separated) allowed to use the module; they do not need to exist on the local system. By default all users can access without a password |
secrets file | The file holding usernames and passwords, one "username:password" per line; passwords no longer than 8 characters |
[backup] | The module name, enclosed in square brackets. The name itself has no special requirements, but a meaningful one makes later maintenance easier |
path | The filesystem path or directory the daemon uses for this module; its permissions must be consistent with the settings in the config file, otherwise read/write problems will occur |
[4] Common rsync command options
rsync --help
rsync [options] SOURCE DESTINATION
Common options:
-r  Recursive mode, include directories and all files in subdirectories
-l  Copy symbolic links as symbolic links
-v  Show detailed information about the sync process
-z  Compress files during transfer
-p  Preserve file permission marks
-a  Archive mode, recursive and preserving object attributes, equivalent to -rlptgoD
-t  Preserve file timestamps
-g  Preserve file group ownership (superuser only)
-o  Preserve file ownership (superuser only)
-H  Preserve hard-linked files
-A  Preserve ACL attribute information
-D  Preserve device files and other special files
--delete    Delete files that exist in the destination but not in the source
--checksum  Decide whether to skip files based on their checksums
[5] Start the service (on the data-source machine)
# rsync listen port: 873
# rsync run mode: C/S
rsync --daemon --config=/etc/rsyncd.conf
netstat -tnlp|grep :873
[6] Run the command to sync the data
# Run on the destination machine
# rsync -avz user@source_host:/source_dir destination_dir
rsync -avz root@192.168.0.113:/opt/nfsdata/* /opt/nfsdata/
[7] Scheduled sync with crontab
# crontab entry to sync every five minutes; this approach is not ideal
*/5 * * * * rsync -avz root@192.168.0.113:/opt/nfsdata/* /opt/nfsdata/
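Since an rsyncd module ([nfsdata], auth user backuper) was configured above, the cron job can also pull through the rsync daemon with a password file instead of ssh as root, which avoids any interactive prompt (a sketch; the password file path is arbitrary and must contain only the password):
# On the destination machine
echo "123456" > /etc/rsync.pass
chmod 600 /etc/rsync.pass
# Pull from the [nfsdata] module defined in /etc/rsyncd.conf on 192.168.0.113
rsync -avz --password-file=/etc/rsync.pass backuper@192.168.0.113::nfsdata /opt/nfsdata/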
[Tip] Scheduled syncing with crontab is not great; rsync+inotify can be used for real-time data synchronization. That would make this article too long, so it may be covered in a separate article later.
6) Create the NFS provisioner and the persistent-storage StorageClass (SC)
[Tip] This differs slightly from my earlier article; the old method no longer works with the new version.
GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
Install nfs-subdir-external-provisioner with helm
1. Add the helm repository
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
2. Install the NFS provisioner with helm
[Tip] The default image cannot be pulled, so the image willdockerhub/nfs-subdir-external-provisioner:v4.0.2 found on Docker Hub is used instead. Also, a StorageClass is not namespaced, so it can be used from every namespace.
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace=nfs-provisioner \
  --create-namespace \
  --set image.repository=willdockerhub/nfs-subdir-external-provisioner \
  --set image.tag=v4.0.2 \
  --set replicaCount=2 \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=true \
  --set nfs.server=192.168.0.120 \
  --set nfs.path=/opt/nfsdata
[Tip] nfs.server above is set to the VIP, which keeps the provisioner highly available.
3. Check
kubectl get pods,deploy,sc -n nfs-provisioner
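To confirm the nfs-client StorageClass actually provisions volumes, a throwaway PVC can be created and deleted (a sketch; test-nfs-pvc is just an example name):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
EOF
# STATUS should become Bound and a matching subdirectory appears under /opt/nfsdata
kubectl get pvc test-nfs-pvc
kubectl delete pvc test-nfs-pvc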
7) Deploy Harbor (HTTPS)
1. Create the namespace
kubectl create ns harbor
2. Create the certificate secret
kubectl create secret tls myharbor.com --key myharbor.com.key --cert myharbor.com.crt -n harbor
kubectl get secret myharbor.com -n harbor
3. Add the chart repository
helm repo add harbor https://helm.goharbor.io
4. Install Harbor with helm
helm install myharbor --namespace harbor harbor/harbor \
  --set expose.ingress.hosts.core=myharbor.com \
  --set expose.ingress.hosts.notary=notary.myharbor.com \
  --set-string expose.ingress.annotations.'nginx.org/client-max-body-size'="1024m" \
  --set expose.tls.secretName=myharbor.com \
  --set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
  --set persistence.enabled=true \
  --set externalURL=https://myharbor.com \
  --set harborAdminPassword=Harbor12345
Wait a while here, then check the resource status
kubectl get ingress,svc,pods,pvc -n harbor
5. Fixing the missing ADDRESS on the ingress
[Analysis] The logs show "error: endpoints \"default-http-backend\" not found".
cat << EOF > default-http-backend.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: harbor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        # image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: harbor
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
EOF
kubectl apply -f default-http-backend.yaml
6. Uninstall and redeploy
# Uninstall
helm uninstall myharbor -n harbor
kubectl get pvc -n harbor | awk 'NR!=1{print $1}' | xargs kubectl delete pvc -n harbor
# Deploy again
helm install myharbor --namespace harbor harbor/harbor \
  --set expose.ingress.hosts.core=myharbor.com \
  --set expose.ingress.hosts.notary=notary.myharbor.com \
  --set-string expose.ingress.annotations.'nginx.org/client-max-body-size'="1024m" \
  --set expose.tls.secretName=myharbor.com \
  --set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
  --set persistence.enabled=true \
  --set externalURL=https://myharbor.com \
  --set harborAdminPassword=Harbor12345
7. Access Harbor
https://myharbor.com
Account/password: admin/Harbor12345
8. Common Harbor operations
[1] Create the project bigdata
[2] Configure the private registry
Add the following to /etc/docker/daemon.json:
"insecure-registries":["https://myharbor.com"]
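Note that daemon.json has to stay valid JSON, so the new key goes alongside the mirror configured earlier rather than being pasted on its own. A sketch of the resulting file (it assumes the registry-mirrors entry from the Docker install step):
cat >/etc/docker/daemon.json<<EOF
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "insecure-registries": ["https://myharbor.com"]
}
EOF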
Restart docker
systemctl restart docker
[3] Log in to Harbor from the server
docker login https://myharbor.com
# Account/password: admin/Harbor12345
[4] Tag an image and push it to Harbor
docker tag rancher/pause:3.6 myharbor.com/bigdata/pause:3.6
docker push myharbor.com/bigdata/pause:3.6
9. Modify the containerd configuration
Previously, with docker-engine, only /etc/docker/daemon.json had to be changed, but the new k8s already uses containerd, so the corresponding configuration has to be done there as well, otherwise containerd pulls will fail. The certificate (ca.crt) can be downloaded from the Harbor web page.
Create a directory for the domain
mkdir /etc/containerd/myharbor.com
cp ca.crt /etc/containerd/myharbor.com/
Configuration file: /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = ""

  [plugins."io.containerd.grpc.v1.cri".registry.auths]

  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".tls]
      ca_file = "/etc/containerd/myharbor.com/ca.crt"
    [plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".auth]
      username = "admin"
      password = "Harbor12345"

  [plugins."io.containerd.grpc.v1.cri".registry.headers]

  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."myharbor.com"]
      endpoint = ["https://myharbor.com"]
Restart containerd
# Reload the configuration
systemctl daemon-reload
# Restart containerd
systemctl restart containerd
Simple usage
# Just replace docker with crictl; the commands are nearly identical
crictl pull myharbor.com/bigdata/mysql:5.7.38
Steps to resolve the following error reported by crictl
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
This error relates to the docker endpoint, which is not used here, so it does not affect anything, but it is still better to fix it. The fix is as follows:
cat <<EOF> /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
Pull the image again
crictl pull myharbor.com/bigdata/mysql:5.7.38
That is all for this detailed walkthrough of the latest and most complete Kubernetes (k8s) base environment deployment plus master high availability. If anything is unclear, feel free to leave me a comment~
ÒÔÉϾÍÊÇÏê½â K8S ¸ß¿ÉÓð²ÅÅ£¬³¬Ïêϸ£¡µÄÏêϸÄÚÈÝ£¬¸ü¶àÇë¹Ø×¢±¾ÍøÄÚÆäËüÏà¹ØÎÄÕ£¡