
Kubernetes Cluster (3) - Building a Cluster for High Availability (H/A)

Room9_ 2022. 4. 14. 17:15

Overview

The clusters built in the previous posts were mostly test environments: either a single node, or one master with two workers.

Such a cluster cannot guarantee availability in a real production environment, so this post walks through building a Kubernetes cluster designed for high availability (H/A).

Detail

  • CRI install
  • kubeadm install
  • Load balancer config
  • Kubernetes H/A install

Prerequisites

# Load the kernel modules required by the container runtime and kube-proxy,
# and make them persistent across reboots
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

# kubelet requires swap to be off; comment it out of /etc/fstab as well
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
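A quick sanity check that the modules are loaded and the sysctl values are in effect:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward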

CRI install

# Packages needed to use the Docker apt repository over HTTPS
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# Add Docker's official GPG key and apt repository (containerd.io is distributed there)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install containerd and generate its default configuration
sudo apt-get update
sudo apt-get install containerd.io
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

To use the systemd cgroup driver, open /etc/containerd/config.toml and set SystemdCgroup = true under the runc options:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Restart containerd so the new config takes effect:

sudo systemctl restart containerd
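
If you'd rather script the edit instead of opening the file, a sed one-liner on the freshly generated default config does the same thing (followed by the same restart):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml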


Kubeadm install

# Add the Kubernetes apt repository
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update

# Pin all three components to the same version and hold them so routine
# apt upgrades don't move the cluster unexpectedly
sudo apt-get install -y kubelet=1.21.7-00 kubeadm=1.21.7-00 kubectl=1.21.7-00 &&
sudo apt-mark hold kubelet kubeadm kubectl

Verify the installation

kubeadm version
kubelet --version
kubectl version --client

LoadBalancer config

- NAVER Cloud Proxy_TCP LoadBalancer

(Screenshots: the LB is created with the Network_Proxy (TCP) type and the master nodes are registered as its targets; the LB address, 192.168.200.6:6443, is used as the control-plane endpoint below.)
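
Outside NAVER Cloud, any TCP pass-through proxy can fill the same role. Below is a minimal HAProxy sketch for comparison; the master IPs are placeholders for your own, and only the frontend port and the node names are taken from this post:

# Minimal /etc/haproxy/haproxy.cfg -- TCP pass-through to the three kube-apiservers
cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube-apiserver
    bind *:6443
    default_backend kube-masters

backend kube-masters
    balance roundrobin
    option tcp-check
    server kube-cp-001 192.168.200.11:6443 check  # placeholder IP
    server kube-cp-002 192.168.200.12:6443 check  # placeholder IP
    server kube-cp-003 192.168.200.13:6443 check  # placeholder IP
EOF
sudo systemctl restart haproxy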


Kubernetes H/A install

master1

# Initialize the first control-plane node. The control-plane endpoint is the LB
# address, and --upload-certs stores the control-plane certificates in the cluster
# so the other masters can fetch them when they join.
sudo kubeadm init --pod-network-cidr=10.32.0.0/12 --control-plane-endpoint=192.168.200.6 --upload-certs

Errors encountered

1.

error execution phase preflight: [preflight] Some fatal errors occurred:

    [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

Run the command below, then retry the join:

echo '1' > /proc/sys/net/ipv4/ip_forward

(This does not persist across reboots; the net.ipv4.ip_forward = 1 line in the sysctl config under Prerequisites is the permanent fix.)

2.

error execution phase preflight: [preflight] Some fatal errors occurred:

    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Run the command below, then retry the join:

echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

3.

When pod-network-cidr is not set

If kubeadm init is run without pod-network-cidr, pods are never assigned an IP range, so even after a CNI is installed, coredns stays stuck in Pending. Each CNI appears to have its own default CIDR; since Weave Net is used as the CNI later, 10.32.0.0/12 (Weave Net's default range) was chosen here. I can't say for certain that this option alone resolved the issue, but the symptom points to a container networking problem during cluster bootstrap.
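
A quick way to confirm the symptom; the coredns pods sit in Pending with no IP assigned (kube-dns is the label CoreDNS still carries):

kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide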

If the init succeeds, kubeadm prints the join commands for the remaining nodes:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.200.6:6443 --token 5zmosh.sa9one1rn6fzq1h6 \
        --discovery-token-ca-cert-hash sha256:817cb84b822855a8feb8bb2508f9cfa696c03911e1bce09553f556cd683239df \
        --control-plane --certificate-key 536feb5cc2b22570f2bf1c25b49db4188d8a2420779fc74fbe7c5a737094d504

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.6:6443 --token 5zmosh.sa9one1rn6fzq1h6 \
        --discovery-token-ca-cert-hash sha256:817cb84b822855a8feb8bb2508f9cfa696c03911e1bce09553f556cd683239df

master 2,3

kubeadm join 192.168.200.6:6443 --token 5zmosh.sa9one1rn6fzq1h6 \
        --discovery-token-ca-cert-hash sha256:817cb84b822855a8feb8bb2508f9cfa696c03911e1bce09553f556cd683239df \
        --control-plane --certificate-key 536feb5cc2b22570f2bf1c25b49db4188d8a2420779fc74fbe7c5a737094d504

worker 1,2

kubeadm join 192.168.200.6:6443 --token 5zmosh.sa9one1rn6fzq1h6 \
        --discovery-token-ca-cert-hash sha256:817cb84b822855a8feb8bb2508f9cfa696c03911e1bce09553f556cd683239df
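
Note that the join token expires after 24 hours and the uploaded certificates after two (as the init output above warns). If either has lapsed by the time a node joins, fresh values can be generated on master1:

# Prints a complete, fresh worker join command (creates a new token)
kubeadm token create --print-join-command

# Re-uploads the control-plane certificates and prints a new --certificate-key
sudo kubeadm init phase upload-certs --upload-certs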

master1

kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
kube-cp-001   NotReady   control-plane,master   4m21s   v1.21.7
kube-cp-002   NotReady   control-plane,master   93s     v1.21.7
kube-cp-003   NotReady   control-plane,master   92s     v1.21.7
kube-wk-001   NotReady   <none>                 17s     v1.21.7
kube-wk-002   NotReady   <none>                 14s     v1.21.7

CNI

Every node above reports NotReady because no CNI plugin is installed yet. Install Weave Net:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
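
Once the weave-net pods are Running on every node, the nodes should move from NotReady to Ready:

kubectl -n kube-system get pods -l name=weave-net -o wide
kubectl get nodes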
