Converting a Nutanix VM disk from IDE to SCSI

I converted a VM on Nutanix that had been created with an IDE disk over to SCSI.
Just in case, take a snapshot of the VM before starting the work.
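
A minimal sketch of that, assuming acli's vm.snapshot_create subcommand and a hypothetical snapshot name:

# acli
vm.snapshot_create hoge-vm-01 snapshot_name_list=pre-scsi-conversion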

First, log in to a CVM on the Nutanix cluster and identify the target disk:

# acli
vm.get hoge-vm-01
hoge-vm-01 {
config {
agent_vm: False
allow_live_migrate: True
boot {
device {
disk_addr {
bus: "ide"
index: 1
}
}
uefi_boot: False
}
disk_list {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: "31334ace-7c53-464d-9492-f5ea86e49ce6"
empty: True
}
disk_list {
addr {
bus: "ide" ←変換対象はこれ
index: 1
}
container_id: 66602
container_uuid: "3772c06c-d082-4300-82d7-bf61bbe31af0"
device_uuid: "ffc51c77-bf6e-435b-8740-a9c002d1128e"
naa_id: "naa.6506b8d305671382c9c7b5660af464f1"
source_vmdisk_uuid: "4237633a-d01e-40e1-b5d5-31771e64ee73"
vmdisk_size: 57982058496
vmdisk_uuid: "f4a8ba40-2610-4470-b6f1-11019bb55585"
}
hwclock_timezone: "UTC"
memory_mb: 4096
name: "hoge-vm-01"
nic_list {
mac_addr: "50:6b:8d:88:9e:a1"
network_name: "VLAN2"
network_type: "kNativeNetwork"
network_uuid: "8da3d3f7-109f-447c-95bf-c82219bc93ad"
type: "kNormalNic"
uuid: "58637f42-1938-4245-adee-77f44e3b5a78"
vlan_mode: "kAccess"
}
num_cores_per_vcpu: 1
num_threads_per_core: 1
num_vcpus: 2
num_vnuma_nodes: 0
}
host_name: "172.16.40.6"
host_uuid: "d44d53c3-ffa5-4abe-9d18-07232d3cbfd9"
logical_timestamp: 58
state: "kOn"
uuid: "a01ec35d-23c9-4543-938c-c16f59807473"
}

Once the target is identified, clone the disk from ide to scsi. The source disk is addressed as vm:<vm name>:<bus>.<index>, so ide.1 here refers to the data disk found above. (The VM was also shut down along the way; the later vm.get output shows state kOff.)

vm.disk_create hoge-vm-01 bus=scsi clone_from_vmdisk=vm:hoge-vm-01:ide.1
DiskCreate: complete

Check the result:

vm.get hoge-vm-01
hoge-vm-01 {
config {
agent_vm: False
allow_live_migrate: True
boot {
device {
disk_addr {
bus: "ide"
index: 1
}
}
uefi_boot: False
}
disk_list {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: "31334ace-7c53-464d-9492-f5ea86e49ce6"
empty: True
}
disk_list {
addr {
bus: "ide"
index: 1
}
container_id: 66602
container_uuid: "3772c06c-d082-4300-82d7-bf61bbe31af0"
device_uuid: "ffc51c77-bf6e-435b-8740-a9c002d1128e"
naa_id: "naa.6506b8d305671382c9c7b5660af464f1"
source_vmdisk_uuid: "4237633a-d01e-40e1-b5d5-31771e64ee73"
vmdisk_size: 57982058496
vmdisk_uuid: "f4a8ba40-2610-4470-b6f1-11019bb55585"
}
disk_list {
addr {
bus: "scsi" ←scsiにてclone
index: 0
}
container_id: 66602
container_uuid: "3772c06c-d082-4300-82d7-bf61bbe31af0"
device_uuid: "cce3f4e3-9b67-499e-a9c5-965ab30207d2"
naa_id: "naa.6506b8dd156861c3f39cfde644856f40"
source_vmdisk_uuid: "f4a8ba40-2610-4470-b6f1-11019bb55585"
vmdisk_size: 57982058496
vmdisk_uuid: "cb4fcb59-b217-4eec-b146-0697ae25afb7"
}
hwclock_timezone: "UTC"
memory_mb: 4096
name: "hoge-vm-01"
nic_list {
mac_addr: "50:6b:8d:88:9e:a1"
network_name: "VLAN2"
network_type: "kNativeNetwork"
network_uuid: "8da3d3f7-109f-447c-95bf-c82219bc93ad"
type: "kNormalNic"
uuid: "58637f42-1938-4245-adee-77f44e3b5a78"
vlan_mode: "kAccess"
}
num_cores_per_vcpu: 1
num_threads_per_core: 1
num_vcpus: 2
num_vnuma_nodes: 0
}
logical_timestamp: 59
state: "kOff"
uuid: "a01ec35d-23c9-4543-938c-c16f59807473"
}

The clone looks good.

Now delete the original ide disk.
acli's interactive mode is handy here: hammering Tab shows the available disk addresses.

vm.disk_delete hoge-vm-01 disk_addr=<TAB>
ide.0 ide.1 scsi.0
vm.disk_delete hoge-vm-01 disk_addr=ide.1
Delete existing disk? (yes/no) yes
DiskDelete: complete

Check once more:

vm.get hoge-vm-01
hoge-vm-01 {
config {
agent_vm: False
allow_live_migrate: True
disk_list {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: "31334ace-7c53-464d-9492-f5ea86e49ce6"
empty: True
}
disk_list {
addr {
bus: "scsi"
index: 0
}
container_id: 66602
container_uuid: "3772c06c-d082-4300-82d7-bf61bbe31af0"
device_uuid: "cce3f4e3-9b67-499e-a9c5-965ab30207d2"
naa_id: "naa.6506b8dd156861c3f39cfde644856f40"
source_vmdisk_uuid: "f4a8ba40-2610-4470-b6f1-11019bb55585"
vmdisk_size: 57982058496
vmdisk_uuid: "cb4fcb59-b217-4eec-b146-0697ae25afb7"
}
hwclock_timezone: "UTC"
memory_mb: 4096
name: "hoge-vm-01"
nic_list {
mac_addr: "50:6b:8d:88:9e:a1"
network_name: "VLAN2"
network_type: "kNativeNetwork"
network_uuid: "8da3d3f7-109f-447c-95bf-c82219bc93ad"
type: "kNormalNic"
uuid: "58637f42-1938-4245-adee-77f44e3b5a78"
vlan_mode: "kAccess"
}
num_cores_per_vcpu: 1
num_threads_per_core: 1
num_vcpus: 2
num_vnuma_nodes: 0
}
logical_timestamp: 60
state: "kOff"
uuid: "a01ec35d-23c9-4543-938c-c16f59807473"

Conversion complete.
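
One caveat: the VM originally booted from ide.1, and that boot entry disappeared along with the deleted disk. If the guest no longer boots from the new disk, pointing the boot device at scsi.0 should fix it. A hedged sketch, assuming acli's vm.update_boot_device:

vm.update_boot_device hoge-vm-01 disk_addr=scsi.0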

Building a Kubernetes Cluster on CentOS 8 (Part 2)

In Building a Kubernetes Cluster on CentOS 8 (Part 1), I finished setting up the master node, so this time I'll build the worker node servers.

Up through disabling selinux, firewalld, and swap, adding the servers to hosts, and installing docker and kubeadm, the steps are the same as for the master server.
I built the two node servers as KVM clones.
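
A minimal sketch of that prep work, to run on every node (assuming part 1's setup; the sed pattern is an assumption about the stock selinux config file):

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# systemctl disable --now firewalld
# swapoff -a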


Paste in the kubeadm join command that kubeadm init printed on the master in part 1:

# kubeadm join 192.168.5.101:6443 --token vb94zf.j2un0ry3aunvx9y3 --discovery-token-ca-cert-hash sha256:f42734223e79931159a470fe37a27677c51bcca051feccefbb8a050fbbfcad43
W0318 23:57:21.802582 1546 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path

error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
To see the stack trace of this error execute with --v=5 or higher

It failed...

Ping from the node to the master is fine.
Check the token on the master:

[root@master ~]# kubeadm token list

Nothing is listed.
Recreate the token:

[root@master ~]# kubeadm token create --ttl 0
W0319 00:23:48.794101 21520 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0319 00:23:48.794139 21520 validation.go:28] Cannot validate kubelet config - no validator is available
77m1r6.thccq5p3w9a0513z
[root@master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION   EXTRA GROUPS
77m1r6.thccq5p3w9a0513z   <forever>   <never>   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
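
Incidentally, kubeadm can also print a complete, ready-to-paste join command, token included. A hedged one-liner, assuming the --print-join-command flag:

[root@master ~]# kubeadm token create --print-join-command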

A token exists now, so try joining again from the node server:

# kubeadm join 192.168.5.101:6443 --token 77m1r6.thccq5p3w9a0513z --discovery-token-ca-cert-hash sha256:f42734223e79931159a470fe37a27677c51bcca051feccefbb8a050fbbfcad43
W0319 21:59:35.792352 1067 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

That seems to have worked, so confirm on the master:

[root@master ~]# kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
master.hoge.net    Ready    master   2d22h   v1.17.4
node-01.hoge.net   Ready    <none>   21h     v1.17.4

The node was added successfully.

Build one more node server the same way and join it:

[root@master ~]# kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
master.hoge.net    Ready    master   6d22h   v1.17.4
node-01.hoge.net   Ready    <none>   4d21h   v1.17.4
node-02.hoge.net   Ready    <none>   4d      v1.17.4

And with that, we have a Kubernetes cluster of one master and two nodes.

Building a Kubernetes Cluster on CentOS 8 (Part 1)

Let's build a Kubernetes cluster on CentOS 8.
Everything runs on KVM VMs.
The layout is one master and two nodes.

Disable selinux and firewalld.
kubeadm also refuses to run while swap is enabled, so turn swap off:

# free -m
              total        used        free      shared  buff/cache   available
Mem:           1989         216         754           8        1017        1612
Swap:          2115           0        2115
# swapoff -a
# free -m
              total        used        free      shared  buff/cache   available
Mem:           1989         215         756           8        1016        1614
Swap:             0           0           0
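
swapoff only lasts until the next reboot; to keep swap off permanently, comment it out of fstab as well. A hedged sketch, assuming the swap entry in /etc/fstab contains " swap ":

# sed -i '/ swap /s/^/#/' /etc/fstab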

Add all the servers to hosts:

# cat <<EOF >> /etc/hosts
192.168.5.101 master master.hoge.net
192.168.5.102 node-01 node-01.hoge.net
192.168.5.103 node-02 node-02.hoge.net
EOF

Do all of the above on all three servers.

With the prep done, install docker on the master server.
The steps I followed are written up in "Installing Docker on CentOS 8" below.

Once docker is in, it's finally time to install kubernetes.
Add the repository:

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF

Install kubeadm:

# dnf install kubeadm
Kubernetes 378 B/s | 454 B 00:01
Kubernetes 30 kB/s | 1.8 kB 00:00
Importing GPG key 0xA7317B0F:
Userid : "Google Cloud Packages Automatic Signing Key "
Fingerprint: D0BC 747F D8CA F711 7500 D6FA 3746 C208 A731 7B0F
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Importing GPG key 0xBA07F4FB:
Userid : "Google Cloud Packages Automatic Signing Key "
Fingerprint: 54A6 47F9 048D 5688 D7DA 2ABE 6A03 0B21 BA07 F4FB
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Kubernetes 8.3 kB/s | 975 B 00:00
Importing GPG key 0x3E1BA8D5:
Userid : "Google Cloud Packages RPM Signing Key "
Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Is this ok [y/N]: y
Kubernetes 48 kB/s | 87 kB 00:01
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
kubeadm x86_64 1.17.4-0 kubernetes 8.7 M
Installing dependencies:
socat x86_64 1.7.3.2-6.el8 AppStream 298 k
conntrack-tools x86_64 1.4.4-9.el8 BaseOS 205 k
libnetfilter_cthelper x86_64 1.0.0-13.el8 BaseOS 24 k
libnetfilter_cttimeout x86_64 1.0.0-11.el8 BaseOS 24 k
libnetfilter_queue x86_64 1.0.2-11.el8 BaseOS 30 k
cri-tools x86_64 1.13.0-0 kubernetes 5.1 M
kubectl x86_64 1.17.4-0 kubernetes 9.4 M
kubelet x86_64 1.17.4-0 kubernetes 20 M
kubernetes-cni x86_64 0.7.5-0 kubernetes 10 M

Transaction Summary
================================================================================
Install 10 Packages

Total download size: 54 M
Installed size: 244 M
Is this ok [y/N]: y
Downloading Packages:
(1/10): libnetfilter_cthelper-1.0.0-13.el8.x86_ 633 kB/s | 24 kB 00:00
(2/10): libnetfilter_cttimeout-1.0.0-11.el8.x86 2.5 MB/s | 24 kB 00:00
(3/10): conntrack-tools-1.4.4-9.el8.x86_64.rpm 3.4 MB/s | 205 kB 00:00
(4/10): libnetfilter_queue-1.0.2-11.el8.x86_64. 1.1 MB/s | 30 kB 00:00
(5/10): socat-1.7.3.2-6.el8.x86_64.rpm 2.5 MB/s | 298 kB 00:00
(6/10): 0767753f85f415bbdf1df0e974eafccb653bee0 16 MB/s | 8.7 MB 00:00
(7/10): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67 5.2 MB/s | 5.1 MB 00:00
(8/10): 06400b25ef3577561502f9a7a126bf4975c03b3 9.9 MB/s | 9.4 MB 00:00
(9/10): 548a0dcd865c16a50980420ddfa5fbccb8b5962 10 MB/s | 10 MB 00:00
(10/10): 0c45baca5fcc05bb75f1e953ecaf85844efac0 8.0 MB/s | 20 MB 00:02
--------------------------------------------------------------------------------
Total 13 MB/s | 54 MB 00:04
warning: /var/cache/dnf/kubernetes-33343725abd9cbdc/packages/14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Kubernetes 28 kB/s | 1.8 kB 00:00
Importing GPG key 0xA7317B0F:
Userid : "Google Cloud Packages Automatic Signing Key "
Fingerprint: D0BC 747F D8CA F711 7500 D6FA 3746 C208 A731 7B0F
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Key imported successfully
Importing GPG key 0xBA07F4FB:
Userid : "Google Cloud Packages Automatic Signing Key "
Fingerprint: 54A6 47F9 048D 5688 D7DA 2ABE 6A03 0B21 BA07 F4FB
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Key imported successfully
Kubernetes 9.2 kB/s | 975 B 00:00
Importing GPG key 0x3E1BA8D5:
Userid : "Google Cloud Packages RPM Signing Key "
Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Is this ok [y/N]: y
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : kubectl-1.17.4-0.x86_64 1/10
Installing : cri-tools-1.13.0-0.x86_64 2/10
Installing : libnetfilter_queue-1.0.2-11.el8.x86_64 3/10
Running scriptlet: libnetfilter_queue-1.0.2-11.el8.x86_64 3/10
Installing : libnetfilter_cttimeout-1.0.0-11.el8.x86_64 4/10
Running scriptlet: libnetfilter_cttimeout-1.0.0-11.el8.x86_64 4/10
Installing : libnetfilter_cthelper-1.0.0-13.el8.x86_64 5/10
Running scriptlet: libnetfilter_cthelper-1.0.0-13.el8.x86_64 5/10
Installing : conntrack-tools-1.4.4-9.el8.x86_64 6/10
Running scriptlet: conntrack-tools-1.4.4-9.el8.x86_64 6/10
Installing : socat-1.7.3.2-6.el8.x86_64 7/10
Installing : kubernetes-cni-0.7.5-0.x86_64 8/10
Installing : kubelet-1.17.4-0.x86_64 9/10
Installing : kubeadm-1.17.4-0.x86_64 10/10
Running scriptlet: kubeadm-1.17.4-0.x86_64 10/10
Verifying : socat-1.7.3.2-6.el8.x86_64 1/10
Verifying : conntrack-tools-1.4.4-9.el8.x86_64 2/10
Verifying : libnetfilter_cthelper-1.0.0-13.el8.x86_64 3/10
Verifying : libnetfilter_cttimeout-1.0.0-11.el8.x86_64 4/10
Verifying : libnetfilter_queue-1.0.2-11.el8.x86_64 5/10
Verifying : cri-tools-1.13.0-0.x86_64 6/10
Verifying : kubeadm-1.17.4-0.x86_64 7/10
Verifying : kubectl-1.17.4-0.x86_64 8/10
Verifying : kubelet-1.17.4-0.x86_64 9/10
Verifying : kubernetes-cni-0.7.5-0.x86_64 10/10

Installed:
kubeadm-1.17.4-0.x86_64
socat-1.7.3.2-6.el8.x86_64
conntrack-tools-1.4.4-9.el8.x86_64
libnetfilter_cthelper-1.0.0-13.el8.x86_64
libnetfilter_cttimeout-1.0.0-11.el8.x86_64
libnetfilter_queue-1.0.2-11.el8.x86_64
cri-tools-1.13.0-0.x86_64
kubectl-1.17.4-0.x86_64
kubelet-1.17.4-0.x86_64
kubernetes-cni-0.7.5-0.x86_64

Complete!

Enable kubelet at boot and start it:

# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
# systemctl start kubelet
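
(As a side note, I believe these two steps can be combined into a single systemctl enable --now kubelet.)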

Now initialize the control plane with kubeadm init:

# kubeadm init
W0316 23:27:57.301449 4904 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0316 23:27:57.301498 4904 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master.hogehoge.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master.hogehoge.net localhost] and IPs [192.168.5.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master.hogehoge.net localhost] and IPs [192.168.5.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0316 23:28:22.896741 4904 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0316 23:28:22.897850 4904 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.502341 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master.hogehoge.net as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master.hogehoge.net as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: vb94zf.j2un0ry3aunvx9y3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.5.101:6443 --token vb94zf.j2un0ry3aunvx9y3 \
--discovery-token-ca-cert-hash sha256:f42734223e79931159a470fe37a27677c51bcca051feccefbb8a050fbbfcad43

The kubeadm join 192.168.5.101:6443 --token ... command is what you'll need later when adding nodes.

Create the directory and config file that the log above tells you to:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

With that done, confirm that kubectl works:

# kubectl get nodes
NAME                  STATUS     ROLES    AGE    VERSION
master.hogehoge.net   NotReady   master   3m2s   v1.17.4

It shows NotReady, but at this point that's expected.

Next, set up the pod network.

# export kubever=$(kubectl version | base64 | tr -d '\ n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
error: unable to read URL "https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcNpb246IHZlcNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxNyIsIEdpdFZl", server reported 400 Bad Request, status code=400
https://github.com/weaveworks/weave/issues/3048

That failed...
The command base64-encodes the kubectl version string into the URL, and the stray space in tr -d '\ n' deletes every literal n (plus backslashes and spaces) from it instead of stripping newlines, which mangles the version parameter. Rather than fix the tr expression, I grabbed the version and hit the URL directly.

Check the version:

# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

kubernetes is apparently v1.17.4,

so try hard-coding the version into the URL:

# kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/v1.17.4/net"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

That seems to have worked.
Check the node:

# kubectl get nodes
NAME                  STATUS     ROLES    AGE   VERSION
master.hogehoge.net   NotReady   master   58m   v1.17.4
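
While waiting, you can watch the weave-net pods come up; a quick sketch:

# kubectl get pods -n kube-system -w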

After a little while...

[root@master ~]# kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
master.hogehoge.net   Ready    master   59m   v1.17.4

It's Ready!
For now, that wraps up building the kubernetes master node.

Installing Docker on CentOS 8

Here's how I got docker installed on CentOS 8.

The OS:

# cat /etc/redhat-release
CentOS Linux release 8.1.1911 (Core)

On CentOS 7 it installed with a plain yum, but here it doesn't:

# dnf install docker
CentOS-8 - AppStream 4.3 kB/s | 4.3 kB 00:00
CentOS-8 - Base 4.3 kB/s | 3.8 kB 00:00
CentOS-8 - Extras 1.6 kB/s | 1.5 kB 00:00
No match for argument: docker
Error: Unable to find a match: docker

Add the docker-ce repository:

# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo

Try installing:

# dnf install docker-ce
Last metadata expiration check: 0:03:38 ago on Mon 16 Mar 2020 10:13:35 PM JST.
Error:
Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
- cannot install the best candidate for the job
- package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
- package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
- package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
- package containerd.io-1.2.2-3.el7.x86_64 is excluded
- package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
- package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
- package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

It was rejected over the containerd.io version requirement.

So install a new-enough containerd.io package directly from Docker's CentOS 7 repository:

# dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
Last metadata expiration check: 0:05:34 ago on Mon 16 Mar 2020 10:13:35 PM JST.
containerd.io-1.2.6-3.3.el7.x86_64.rpm 34 MB/s | 26 MB 00:00
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
containerd.io x86_64 1.2.6-3.3.el7 @commandline 26 M
Installing dependencies:
container-selinux noarch 2:2.124.0-1.module_el8.1.0+272+3e64ee36
AppStream 47 k
checkpolicy x86_64 2.9-1.el8 BaseOS 348 k
policycoreutils-python-utils noarch 2.9-3.el8_1.1 BaseOS 250 k
python3-audit x86_64 3.0-0.10.20180831git0047a6c.el8
BaseOS 85 k
python3-libsemanage x86_64 2.9-1.el8 BaseOS 127 k
python3-policycoreutils noarch 2.9-3.el8_1.1 BaseOS 2.2 M
python3-setools x86_64 4.2.2-1.el8 BaseOS 600 k
Enabling module streams:
container-tools rhel8

Transaction Summary
(snip)

Now for the docker install itself:

# dnf install docker-ce
Last metadata expiration check: 0:06:41 ago on Mon 16 Mar 2020 10:13:35 PM JST.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
docker-ce x86_64 3:19.03.8-3.el7 docker-ce-stable 25 M
Installing dependencies:
libcgroup x86_64 0.41-19.el8 BaseOS 70 k
docker-ce-cli x86_64 1:19.03.8-3.el7 docker-ce-stable 40 M

Transaction Summary
================================================================================
Install 3 Packages

Total download size: 64 M
Installed size: 273 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): libcgroup-0.41-19.el8.x86_64.rpm 1.0 MB/s | 70 kB 00:00
(2/3): docker-ce-19.03.8-3.el7.x86_64.rpm 42 MB/s | 25 MB 00:00
(3/3): docker-ce-cli-19.03.8-3.el7.x86_64.rpm 39 MB/s | 40 MB 00:01
--------------------------------------------------------------------------------
Total 42 MB/s | 64 MB 00:01
warning: /var/cache/dnf/docker-ce-stable-091d8a9c23201250/packages/docker-ce-19.03.8-3.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Docker CE Stable - x86_64 20 kB/s | 1.6 kB 00:00
Importing GPG key 0x621E9F35:
Userid : "Docker Release (CE rpm) "
Fingerprint: 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35
From : https://download.docker.com/linux/centos/gpg
Is this ok [y/N]: y
(snip)
Installed:
docker-ce-3:19.03.8-3.el7.x86_64 libcgroup-0.41-19.el8.x86_64
docker-ce-cli-1:19.03.8-3.el7.x86_64

Complete!

# systemctl enable docker
# systemctl start docker
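
To confirm the daemon actually works, a quick smoke test (assuming internet access to pull the image):

# docker run hello-world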

docker installed cleanly on CentOS 8.

What to do about "NFS Stale file handle"

An NFS mount became inaccessible for some reason.
It doesn't even show up in df.
Re-mounting fails too:

# mount.nfs4 nfs-server:/hogehoge_1 /opt/hogehoge_1
mount.nfs4: Stale file handle

A normal umount just reports

device is busy

and refuses to unmount.

So I recovered with a lazy unmount, which detaches the mount point immediately and finishes the cleanup once the filesystem is no longer busy:

# umount -l /opt/hogehoge_1
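
With the stale handle detached, the share could be mounted again. If a lazy unmount isn't enough, umount -f is the more forceful option for NFS; a hedged sketch:

# umount -f /opt/hogehoge_1
# mount.nfs4 nfs-server:/hogehoge_1 /opt/hogehoge_1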

Notes on taking and restoring KVM VM snapshots

Quick notes on taking and restoring a VM snapshot in a KVM environment.

Take a snapshot:

# virsh snapshot-create-as --domain hogehoge.net --name 20200308
Domain snapshot 20200308 created

Confirm it was taken:

# virsh snapshot-list --domain hogehoge.net
 Name       Creation Time               State
------------------------------------------------------------
 20200222   2020-02-22 23:50:12 +0900   running
 20200308   2020-03-08 00:01:31 +0900   running

Next, roll the VM back to a snapshot.
First, check which snapshots exist:

# virsh snapshot-list hogehoge.net
 Name       Creation Time               State
------------------------------------------------------------
 20200222   2020-02-22 23:50:12 +0900   running
 20200308   2020-03-08 00:01:31 +0900   running

Revert to the 20200308 snapshot:

# virsh snapshot-revert hogehoge.net 20200308
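
Once a snapshot is no longer needed, it can be deleted; a hedged sketch using virsh snapshot-delete:

# virsh snapshot-delete hogehoge.net 20200222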

Virtualization sure is convenient.