Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
The general approach is to generate all certs on one node and only distribute the necessary files to the other nodes.
The following scripts require root access or a sudo session.
Configure the kubelet to be a service manager for etcd.
Since etcd was created first, you must override the service priority by creating a new unit file that has higher precedence than the kubeadm-provided kubelet unit file.
NOTES: Do this on all nodes.
mkdir -p /etc/systemd/system/kubelet.service.d/
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=cgroupfs
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet
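To verify that the kubelet picked up the new unit file and is running, check its status:
systemctl status kubelet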
Create configuration files for kubeadm.
Generate one kubeadm configuration file for each host that will have an etcd member running on it using the following script.
NOTES: Do this on HOST0 only.
# Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
export HOST0=192.168.1.41 # Our etcd1.k8s-ha-lab.attawit.me
export HOST1=192.168.1.42 # Our etcd2.k8s-ha-lab.attawit.me
export HOST2=192.168.1.43 # Our etcd3.k8s-ha-lab.attawit.me
# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("etcd1" "etcd2" "etcd3")
for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
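To sanity-check the generated files before moving on, you can print one of them, for example:
cat /tmp/${HOST0}/kubeadmcfg.yaml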
Generate the certificate authority
If you already have a CA, the only action required is copying the CA's crt and key files to /etc/kubernetes/pki/etcd/ca.crt and /etc/kubernetes/pki/etcd/ca.key. After those files have been copied, proceed to the next step, "Create certificates for each member".
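As a minimal sketch, assuming your existing CA files are at /path/to/existing/ca.crt and /path/to/existing/ca.key (illustrative paths), the copy would look like:
mkdir -p /etc/kubernetes/pki/etcd
cp /path/to/existing/ca.crt /etc/kubernetes/pki/etcd/ca.crt
cp /path/to/existing/ca.key /etc/kubernetes/pki/etcd/ca.key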
If you do not already have a CA then run this command on $HOST0 (where you generated the configuration files for kubeadm).
NOTES: Do this on HOST0 only.
kubeadm init phase certs etcd-ca
This creates two files:
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
Create certificates for each member
NOTES: Do this on HOST0 only.
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0
# clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
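As a quick sanity check, you can confirm that ca.key is no longer present in the directories that will be copied off this host; it should only remain under /etc/kubernetes/pki/etcd on HOST0:
find /tmp/${HOST1} /tmp/${HOST2} -name ca.key   # should return nothing
find /etc/kubernetes/pki/etcd -name ca.key       # the CA key stays on HOST0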
Copy certificates and kubeadm configs
The certificates have been generated and now they must be moved to their respective hosts.
NOTES: Do this from HOST0 only.
USER=user
HOST=${HOST1}
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
ssh ${USER}@${HOST}
USER@HOST $ sudo -Es
root@HOST $ chown -R root:root pki
root@HOST $ mv pki /etc/kubernetes/
root@HOST $ exit
USER@HOST $ exit
USER=user
HOST=${HOST2}
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
ssh ${USER}@${HOST}
USER@HOST $ sudo -Es
root@HOST $ chown -R root:root pki
root@HOST $ mv pki /etc/kubernetes/
root@HOST $ exit
USER@HOST $ exit
Ensure all expected files exist
The complete list of required files on $HOST0 is:
/tmp/${HOST0}
└── kubeadmcfg.yaml
---
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
├── ca.crt
├── ca.key
├── healthcheck-client.crt
├── healthcheck-client.key
├── peer.crt
├── peer.key
├── server.crt
└── server.key
On $HOST1:
$HOME
└── kubeadmcfg.yaml
---
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
├── ca.crt
├── healthcheck-client.crt
├── healthcheck-client.key
├── peer.crt
├── peer.key
├── server.crt
└── server.key
On $HOST2:
$HOME
└── kubeadmcfg.yaml
---
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
├── ca.crt
├── healthcheck-client.crt
├── healthcheck-client.key
├── peer.crt
├── peer.key
├── server.crt
└── server.key
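One quick way to compare each host's actual layout against the lists above, assuming the tree utility is installed, is:
tree /etc/kubernetes/pki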
Create the static pod manifests
Now that the certificates and configs are in place it’s time to create the manifests. On each host run the kubeadm command to generate a static manifest for etcd.
root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
root@HOST1 $ kubeadm init phase etcd local --config=/home/user/kubeadmcfg.yaml
root@HOST2 $ kubeadm init phase etcd local --config=/home/user/kubeadmcfg.yaml
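Once the manifests are written, the kubelet on each host starts the etcd static pod. Optionally, check the cluster health with etcdctl, run here from the etcd container image (as root on any of the etcd hosts):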
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:${ETCD_TAG} etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${HOST0}:2379 endpoint health --cluster
Set ${ETCD_TAG} to the version tag of your etcd image, for example 3.4.3-0. To see the etcd image and tag that kubeadm uses, run kubeadm config images list --kubernetes-version ${K8S_VERSION}, where ${K8S_VERSION} is, for example, v1.18.2. To see the Kubernetes version, run either kubelet --version or kubeadm version.
Set ${HOST0} to the IP address of the host you are testing.
We could also use HOST1 or HOST2 here, or list all three in the endpoints,
i.e. https://${HOST0}:2379,https://${HOST1}:2379,https://${HOST2}:2379
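For example, with this lab's values (the etcd tag shown is illustrative; use the one reported by kubeadm config images list for your Kubernetes version):
export K8S_VERSION=v1.18.2
kubeadm config images list --kubernetes-version ${K8S_VERSION} | grep etcd
export ETCD_TAG=3.4.3-0
export HOST0=192.168.1.41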
If the health check fails, check that the nodes can reach each other on ports 2379 and 2380.
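A quick way to test reachability between the etcd hosts, assuming netcat (nc) is available, is something like:
# from etcd1, check that etcd2 answers on the etcd client and peer ports
nc -zv 192.168.1.42 2379
nc -zv 192.168.1.42 2380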
Before moving on to the next section, we need to take note of a few things.
To get the contents of the CA cert, peer cert, and peer private key, do the following:
# CA Cert
user@HOST0 $ cat /etc/kubernetes/pki/etcd/ca.crt
# Peer Cert
user@HOST0 $ cat /etc/kubernetes/pki/etcd/peer.crt
# Peer Private Key (requires root access)
root@HOST0 $ cat /etc/kubernetes/pki/etcd/peer.key