Kubernetes 1.24, From Setup to Sitting Back (Part 1)

Three machines:

1 master node

2 worker nodes

OS: Ubuntu Server 22.04


Part 1: Environment Preparation

1) Install apt-transport-https on all nodes

root@srv1:~# apt install apt-transport-https ca-certificates curl gnupg lsb-release -y


2) Enable bridge-nf-call-iptables/ip6tables on all nodes (allows the bridge's Netfilter to reuse the IP layer's Netfilter code)

root@srv1:~# echo "br_netfilter" > /etc/modules-load.d/k8s.conf
root@srv1:~# modprobe br_netfilter

root@srv1:~# vim /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1

root@srv1:~# sysctl --system
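A quick sanity check after `sysctl --system`, assuming the br_netfilter module has already been loaded (via `modprobe` or a reboot; the file in /etc/modules-load.d/ only takes effect at boot):

```shell
# All three keys should read back as '= 1' on every node:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables \
       net.bridge.bridge-nf-call-ip6tables \
       net.ipv4.ip_forward
```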


3) Disable swap and comment out its entries in fstab

root@srv1:~# vim /etc/fstab
......
#/dev/disk/by-uuid/aa1f65c9-2728-4763-9f2b-d6d0bc1ee92e none swap sw 0 0 
......
#/swap.img      none    swap    sw      0       0

root@srv1:~# swapoff -a
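If you prefer not to edit /etc/fstab by hand, the swap entries can be commented out with one sed command. The sketch below runs against a hypothetical sample file so the pattern can be checked safely; point it at /etc/fstab on the real nodes:

```shell
# Hypothetical sample entry, standing in for the real /etc/fstab:
printf '/swap.img\tnone\tswap\tsw\t0\t0\n' > /tmp/fstab.sample

# Prefix every active line that mentions swap with '#':
sed -i '/swap/ s/^[^#]/#&/' /tmp/fstab.sample

cat /tmp/fstab.sample   # the swap line is now commented out
```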


4) Install kubeadm, kubelet and kubectl on all nodes

root@srv1:~# curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/apt-key.gpg --import
root@srv1:~# chmod 644 /etc/apt/trusted.gpg.d/apt-key.gpg
root@srv1:~# echo "deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
root@srv1:~# apt update ; apt install kubeadm kubelet kubectl -y
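The kubeadm documentation also recommends pinning these three packages, so that a routine `apt upgrade` cannot move the cluster to an unexpected version:

```shell
# Prevent unattended upgrades of the Kubernetes packages:
apt-mark hold kubeadm kubelet kubectl

# Verify the holds took effect:
apt-mark showhold
```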


5) Install containerd.io on all nodes

root@srv1:~# curl -s https://download.docker.com/linux/ubuntu/gpg | gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/apt-key.gpg --import

root@srv1:~# echo "deb https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" >> /etc/apt/sources.list.d/docker.list

root@srv1:~# apt update ; apt install containerd.io -y


6) Configure containerd on all nodes

root@srv1:~# containerd config default > /etc/containerd/config.toml

root@srv1:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
root@srv1:~# sed -i 's#endpoint = ""#endpoint = "https://3laho3y3.mirror.aliyuncs.com"#g' /etc/containerd/config.toml
root@srv1:~# sed -i 's#sandbox_image = "k8s.gcr.io/pause#sandbox_image = "registry.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml

root@srv1:~# systemctl daemon-reload && systemctl restart containerd.service && reboot
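The three sed edits above can fail silently: if a pattern does not match, sed exits 0 and changes nothing. The sketch below replays the SystemdCgroup and sandbox_image substitutions against a hypothetical two-line excerpt of the default config.toml, so the patterns can be verified before touching the real file:

```shell
# Hypothetical excerpt of 'containerd config default' output:
cat > /tmp/config-sample.toml <<'EOF'
    sandbox_image = "k8s.gcr.io/pause:3.6"
            SystemdCgroup = false
EOF

# The same substitutions as in step 6:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/config-sample.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause#sandbox_image = "registry.aliyuncs.com/google_containers/pause#g' /tmp/config-sample.toml

grep 'SystemdCgroup' /tmp/config-sample.toml    # now 'true'
grep 'sandbox_image' /tmp/config-sample.toml    # now the Aliyun mirror
```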


Part 2: Configure the Master Node

1) Initialize the cluster, specifying the API server advertise address and the pod network CIDR

root@srv1:~# kubeadm init --apiserver-advertise-address=192.168.1.11 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
......
......
......
......
......
......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.11:6443 --token vkemxu.1zkcqx00umaech8i \
        --discovery-token-ca-cert-hash sha256:8c83889acbef5a54b410e8d2513b6eca01ee7eef1244737bacec81168fc5d553
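The bootstrap token in the join command above expires after 24 hours by default. If a node needs to be added later, a fresh token and the complete join command can be printed on the master:

```shell
# Print a new 'kubeadm join ...' line, including a fresh token and the CA cert hash:
kubeadm token create --print-join-command
```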


2) Set up the kubeadm environment as prompted by the output above

root@srv1:~# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> .bashrc
root@srv1:~# export KUBECONFIG=/etc/kubernetes/admin.conf

root@srv1:~# crictl images
IMAGE                                                             TAG       IMAGE ID        SIZE
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7a   13.6MB
registry.aliyuncs.com/google_containers/etcd                      3.5.3-0   aebe758cef4cd   102MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.24.2   d3377ffb7177c   33.8MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.24.2   34cdf99b1bb3b   31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.24.2   a634548d10b03   39.5MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.24.2   5d725196c1f47   15.5MB
registry.aliyuncs.com/google_containers/pause                     3.7       221177c6082a8   311kB


3) Deploy the Flannel pod network

# Due to network restrictions, pull the Flannel image in advance. See the kube-flannel.yml file for the exact version required.
root@srv1:~# crictl pull rancher/mirrored-flannelcni-flannel:v0.18.1

root@srv1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
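Before checking node status, it is worth waiting for the Flannel DaemonSet pods to become Ready, since nodes stay NotReady until the CNI is up. A sketch, assuming the manifest's `app=flannel` label:

```shell
# Block until every flannel pod reports Ready (or the timeout expires):
kubectl -n kube-system wait pod -l app=flannel --for=condition=Ready --timeout=120s
```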


4) Check the master node's status

root@srv1:~# kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
srv1.1000y.cloud   Ready    control-plane   9m59s   v1.24.2


5) Check the pods in all namespaces on the master node

root@srv1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-74586cf9b6-bnx4v                   1/1     Running   0          9m52s
kube-system   coredns-74586cf9b6-qdk4w                   1/1     Running   0          9m52s
kube-system   etcd-srv1.1000y.cloud                      1/1     Running   0          10m
kube-system   kube-apiserver-srv1.1000y.cloud            1/1     Running   0          10m
kube-system   kube-controller-manager-srv1.1000y.cloud   1/1     Running   0          10m
kube-system   kube-flannel-ds-qbkzk                      1/1     Running   0          2m22s
kube-system   kube-proxy-tm2l6                           1/1     Running   0          9m52s
kube-system   kube-scheduler-srv1.1000y.cloud            1/1     Running   0          10m


Part 3: Configure the Worker Nodes

1) Join the worker nodes to the Kubernetes cluster

root@srv2:~# kubeadm join 192.168.1.11:6443 --token vkemxu.1zkcqx00umaech8i \
--discovery-token-ca-cert-hash sha256:8c83889acbef5a54b410e8d2513b6eca01ee7eef1244737bacec81168fc5d553
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster
root@srv3:~# kubeadm join 192.168.1.11:6443 --token vkemxu.1zkcqx00umaech8i \
--discovery-token-ca-cert-hash sha256:8c83889acbef5a54b410e8d2513b6eca01ee7eef1244737bacec81168fc5d553
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


2) Verify the cluster

root@srv1:~# kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
srv1.1000y.cloud   Ready    control-plane   41m     v1.24.2
srv2.1000y.cloud   Ready    <none>          3m48s   v1.24.2
srv3.1000y.cloud   Ready    <none>          3m51s   v1.24.2
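The workers showing no role is normal: kubeadm only labels control-plane nodes. If a role should appear in the ROLES column for display purposes, it can be added by hand (node names as in the output above):

```shell
# Purely cosmetic: make 'kubectl get nodes' show a role for the workers.
kubectl label node srv2.1000y.cloud node-role.kubernetes.io/worker=
kubectl label node srv3.1000y.cloud node-role.kubernetes.io/worker=
```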