Building a Kubernetes Cluster on CentOS 8 (Part 1)

In this post I'll build a Kubernetes cluster on CentOS 8.
The cluster runs on KVM virtual machines.
The layout is one master and two worker nodes.

Disable SELinux and firewalld.
Also, kubeadm won't run with swap enabled, so turn swap off as well.

# free -m
              total        used        free      shared  buff/cache   available
Mem:           1989         216         754           8        1017        1612
Swap:          2115           0        2115
# swapoff -a
# free -m
              total        used        free      shared  buff/cache   available
Mem:           1989         215         756           8        1016        1614
Swap:             0           0           0
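
For reference, making these changes persistent across reboots looks roughly like this (a sketch; the standard CentOS 8 file locations are assumed, so adjust if your environment differs):

# setenforce 0                                                          # SELinux to permissive for the running system
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # disabled after the next reboot
# systemctl disable --now firewalld                                     # stop firewalld and keep it off
# sed -i '/ swap / s/^/#/' /etc/fstab                                   # comment out the swap entry so swapoff sticks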

Add each server to /etc/hosts.

# cat <<EOF >> /etc/hosts
192.168.5.101 master master.hoge.net
192.168.5.102 node-01 node-01.hoge.net
192.168.5.103 node-02 node-02.hoge.net
EOF

Do all of the above on all three servers.

Once the prep work is done, install Docker on the master server.
The steps I followed are in my earlier post, "Installing Docker on CentOS 8".
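
In outline, installing Docker on CentOS 8 comes down to something like this (a rough sketch using the upstream docker-ce repo; at the time, --nobest was needed on CentOS 8 to work around a containerd.io dependency conflict, and the details are in that post):

# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# dnf install docker-ce --nobest
# systemctl enable --now docker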

With Docker installed, it's finally time to install Kubernetes.
First, add the repository.

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
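
Just to be safe, you can check that dnf now sees the new repo before installing (an optional sanity check):

# dnf repolist | grep -i kubernetes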

Now install kubeadm.

# dnf install kubeadm
Kubernetes 378 B/s | 454 B 00:01
Kubernetes 30 kB/s | 1.8 kB 00:00
Importing GPG key 0xA7317B0F:
Userid : “Google Cloud Packages Automatic Signing Key
Fingerprint: D0BC 747F D8CA F711 7500 D6FA 3746 C208 A731 7B0F
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Importing GPG key 0xBA07F4FB:
Userid : “Google Cloud Packages Automatic Signing Key
Fingerprint: 54A6 47F9 048D 5688 D7DA 2ABE 6A03 0B21 BA07 F4FB
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Kubernetes 8.3 kB/s | 975 B 00:00
Importing GPG key 0x3E1BA8D5:
Userid : “Google Cloud Packages RPM Signing Key
Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Is this ok [y/N]: y
Kubernetes 48 kB/s | 87 kB 00:01
Dependencies resolved.
================================================================================
 Package                   Arch     Version         Repository   Size
================================================================================
Installing:
 kubeadm                   x86_64   1.17.4-0        kubernetes   8.7 M
Installing dependencies:
 socat                     x86_64   1.7.3.2-6.el8   AppStream    298 k
 conntrack-tools           x86_64   1.4.4-9.el8     BaseOS       205 k
 libnetfilter_cthelper     x86_64   1.0.0-13.el8    BaseOS        24 k
 libnetfilter_cttimeout    x86_64   1.0.0-11.el8    BaseOS        24 k
 libnetfilter_queue        x86_64   1.0.2-11.el8    BaseOS        30 k
 cri-tools                 x86_64   1.13.0-0        kubernetes   5.1 M
 kubectl                   x86_64   1.17.4-0        kubernetes   9.4 M
 kubelet                   x86_64   1.17.4-0        kubernetes    20 M
 kubernetes-cni            x86_64   0.7.5-0         kubernetes    10 M

Transaction Summary
================================================================================
Install 10 Packages

Total download size: 54 M
Installed size: 244 M
Is this ok [y/N]: y
Downloading Packages:
(1/10): libnetfilter_cthelper-1.0.0-13.el8.x86_ 633 kB/s | 24 kB 00:00
(2/10): libnetfilter_cttimeout-1.0.0-11.el8.x86 2.5 MB/s | 24 kB 00:00
(3/10): conntrack-tools-1.4.4-9.el8.x86_64.rpm 3.4 MB/s | 205 kB 00:00
(4/10): libnetfilter_queue-1.0.2-11.el8.x86_64. 1.1 MB/s | 30 kB 00:00
(5/10): socat-1.7.3.2-6.el8.x86_64.rpm 2.5 MB/s | 298 kB 00:00
(6/10): 0767753f85f415bbdf1df0e974eafccb653bee0 16 MB/s | 8.7 MB 00:00
(7/10): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67 5.2 MB/s | 5.1 MB 00:00
(8/10): 06400b25ef3577561502f9a7a126bf4975c03b3 9.9 MB/s | 9.4 MB 00:00
(9/10): 548a0dcd865c16a50980420ddfa5fbccb8b5962 10 MB/s | 10 MB 00:00
(10/10): 0c45baca5fcc05bb75f1e953ecaf85844efac0 8.0 MB/s | 20 MB 00:02
--------------------------------------------------------------------------------
Total 13 MB/s | 54 MB 00:04
warning: /var/cache/dnf/kubernetes-33343725abd9cbdc/packages/14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Kubernetes 28 kB/s | 1.8 kB 00:00
Importing GPG key 0xA7317B0F:
Userid : “Google Cloud Packages Automatic Signing Key
Fingerprint: D0BC 747F D8CA F711 7500 D6FA 3746 C208 A731 7B0F
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Key imported successfully
Importing GPG key 0xBA07F4FB:
Userid : “Google Cloud Packages Automatic Signing Key
Fingerprint: 54A6 47F9 048D 5688 D7DA 2ABE 6A03 0B21 BA07 F4FB
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Key imported successfully
Kubernetes 9.2 kB/s | 975 B 00:00
Importing GPG key 0x3E1BA8D5:
Userid : “Google Cloud Packages RPM Signing Key
Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Is this ok [y/N]: y
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : kubectl-1.17.4-0.x86_64 1/10
Installing : cri-tools-1.13.0-0.x86_64 2/10
Installing : libnetfilter_queue-1.0.2-11.el8.x86_64 3/10
Running scriptlet: libnetfilter_queue-1.0.2-11.el8.x86_64 3/10
Installing : libnetfilter_cttimeout-1.0.0-11.el8.x86_64 4/10
Running scriptlet: libnetfilter_cttimeout-1.0.0-11.el8.x86_64 4/10
Installing : libnetfilter_cthelper-1.0.0-13.el8.x86_64 5/10
Running scriptlet: libnetfilter_cthelper-1.0.0-13.el8.x86_64 5/10
Installing : conntrack-tools-1.4.4-9.el8.x86_64 6/10
Running scriptlet: conntrack-tools-1.4.4-9.el8.x86_64 6/10
Installing : socat-1.7.3.2-6.el8.x86_64 7/10
Installing : kubernetes-cni-0.7.5-0.x86_64 8/10
Installing : kubelet-1.17.4-0.x86_64 9/10
Installing : kubeadm-1.17.4-0.x86_64 10/10
Running scriptlet: kubeadm-1.17.4-0.x86_64 10/10
Verifying : socat-1.7.3.2-6.el8.x86_64 1/10
Verifying : conntrack-tools-1.4.4-9.el8.x86_64 2/10
Verifying : libnetfilter_cthelper-1.0.0-13.el8.x86_64 3/10
Verifying : libnetfilter_cttimeout-1.0.0-11.el8.x86_64 4/10
Verifying : libnetfilter_queue-1.0.2-11.el8.x86_64 5/10
Verifying : cri-tools-1.13.0-0.x86_64 6/10
Verifying : kubeadm-1.17.4-0.x86_64 7/10
Verifying : kubectl-1.17.4-0.x86_64 8/10
Verifying : kubelet-1.17.4-0.x86_64 9/10
Verifying : kubernetes-cni-0.7.5-0.x86_64 10/10

Installed:
kubeadm-1.17.4-0.x86_64
socat-1.7.3.2-6.el8.x86_64
conntrack-tools-1.4.4-9.el8.x86_64
libnetfilter_cthelper-1.0.0-13.el8.x86_64
libnetfilter_cttimeout-1.0.0-11.el8.x86_64
libnetfilter_queue-1.0.2-11.el8.x86_64
cri-tools-1.13.0-0.x86_64
kubectl-1.17.4-0.x86_64
kubelet-1.17.4-0.x86_64
kubernetes-cni-0.7.5-0.x86_64

Complete!

Enable kubelet at boot and start it.

# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
# systemctl start kubelet
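
As an aside, systemctl can enable and start in one go:

# systemctl enable --now kubelet

Either way, kubelet will sit in a restart loop until kubeadm init hands it a configuration; that is normal at this stage.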

Now initialize the cluster with kubeadm.

# kubeadm init
W0316 23:27:57.301449 4904 validation.go:28] Cannot validate kube-proxy config – no validator is available
W0316 23:27:57.301498 4904 validation.go:28] Cannot validate kubelet config – no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected “cgroupfs” as the Docker cgroup driver. The recommended driver is “systemd”. Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [master.hogehoge.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.101]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Generating “etcd/ca” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [master.hogehoge.net localhost] and IPs [192.168.5.101 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master.hogehoge.net localhost] and IPs [192.168.5.101 127.0.0.1 ::1]
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
W0316 23:28:22.896741 4904 manifests.go:214] the default kube-apiserver authorization-mode is “Node,RBAC”; using “Node,RBAC”
[control-plane] Creating static Pod manifest for “kube-scheduler”
W0316 23:28:22.897850 4904 manifests.go:214] the default kube-apiserver authorization-mode is “Node,RBAC”; using “Node,RBAC”
[etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.502341 seconds
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[kubelet] Creating a ConfigMap “kubelet-config-1.17” in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master.hogehoge.net as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master.hogehoge.net as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: vb94zf.j2un0ry3aunvx9y3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the “cluster-info” ConfigMap in the “kube-public” namespace
[kubelet-finalize] Updating “/etc/kubernetes/kubelet.conf” to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.5.101:6443 --token vb94zf.j2un0ry3aunvx9y3 \
--discovery-token-ca-cert-hash sha256:f42734223e79931159a470fe37a27677c51bcca051feccefbb8a050fbbfcad43

The kubeadm join 192.168.5.101:6443 --token ... command will be needed later when joining the worker nodes.
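
As a side note, the preflight warning above about Docker using the "cgroupfs" cgroup driver can be cleared by switching Docker to the systemd driver, ideally before running kubeadm init. A minimal sketch, assuming the standard /etc/docker/daemon.json location:

# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# systemctl restart docker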

Create the directory and kubeconfig that the log above tells us to create.

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
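
Another common option when working as root is to point KUBECONFIG straight at admin.conf instead of copying it:

# export KUBECONFIG=/etc/kubernetes/admin.conf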

At this point, check whether kubectl works.

# kubectl get nodes
NAME                  STATUS     ROLES    AGE    VERSION
master.hogehoge.net   NotReady   master   3m2s   v1.17.4

The node shows NotReady, but that's expected at this point, since no pod network has been deployed yet.

So, let's create this "pod network" thing.

# export kubever=$(kubectl version | base64 | tr -d '\ n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever
> "
error: unable to read URL "https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcNpb246IHZlcNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxNyIsIEdpdFZl", server reported 400 Bad Request, status code=400
https://github.com/weaveworks/weave/issues/3048

It fell over...
The command builds the URL from the output of kubectl version, and the encoded version string in the error above looks mangled.

Check the version:

# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes appears to be v1.17.4.

Let's try hard-coding that version in the URL.

# kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/v1.17.4/net"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

That seems to have worked.
Let's check.

# kubectl get nodes
NAME                  STATUS     ROLES    AGE   VERSION
master.hogehoge.net   NotReady   master   58m   v1.17.4

And after a little while...

[root@master ~]# kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
master.hogehoge.net   Ready    master   59m   v1.17.4

It's Ready!
For now at least, the Kubernetes master node is up and running.
