Setting Up a Kubernetes Cluster on Ubuntu 18.04 Using VirtualBox

System Requirements:

Install Ubuntu 18.04 on three machines using VirtualBox. The system requirements are as follows:

Kubernetes Master Node:
  vCPU = 2
  Memory = 2 GB
  Hostname = k8-master (optional)
  Network Adapter: Bridged Networking
  Static IP: 192.168.0.100/24

Kubernetes Client Node 1:
  vCPU = 1
  Memory = 1 GB
  Hostname = k8-node1 (optional)
  Network Adapter: Bridged Networking
  Static IP: 192.168.0.101/24

Kubernetes Client Node 2:
  vCPU = 1
  Memory = 1 GB
  Hostname = k8-node2 (optional)
  Network Adapter: Bridged Networking
  Static IP: 192.168.0.103/24

Note: Why are we setting a static IP?

If you want the Kubernetes cluster we are going to create to keep working across reboots, set static IP addresses so that the nodes in the cluster can still reach each other after a reboot.
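
For reference, on Ubuntu 18.04 a static IP is usually configured with netplan. Below is a minimal sketch for the master node; the file name /etc/netplan/01-netcfg.yaml, the interface name enp0s3, the gateway 192.168.0.1 and the DNS server are assumptions, so adjust them to your own VirtualBox bridged network:

# cat <<EOF | sudo tee /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [192.168.0.100/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [8.8.8.8]
EOF
# sudo netplan apply

Use 192.168.0.101/24 and 192.168.0.103/24 in the same way on the two client nodes.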

Here is the basic architecture of what we are going to create in this article: one master node (k8-master) and two worker nodes (k8-node1, k8-node2) on the same bridged 192.168.0.0/24 network.


Purpose of packages:

Kubeadm - is an additional tool that simplifies the process of setting up a Kubernetes cluster.

Kubectl - is a command line tool that we can use to interact with the cluster.

Kubelet - is an agent that manages the process of running containers on each node.

Step 1:

Install Docker on k8-node1, k8-node2 and k8-master node

# sudo apt install curl vim net-tools openssh-server
# sudo swapoff /swapfile
# sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
# sudo apt-get update
# sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
# sudo apt-mark hold docker-ce

Run the above commands on all three nodes.
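
Since the cluster is meant to survive reboots, it is also worth keeping swap disabled permanently; swapoff only lasts until the next boot. A minimal sketch, assuming the default /swapfile entry in /etc/fstab:

# sudo sed -i '/swapfile/ s/^/#/' /etc/fstab   # comment out the swap entry so it is not re-enabled at boot
# free -h                                      # the Swap line should show 0B once swapoff has run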

Then verify the Docker installation on all of them:

# sudo docker version        

Step 2:

Install Kubeadm, Kubelet and Kubectl on k8-node1, k8-node2 and k8-master node

# sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
# cat /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
# sudo apt-get update
# sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
# sudo apt-mark hold kubelet kubeadm kubectl

Follow the above commands on all three nodes.

After installing these components, verify that Kubeadm is working by getting the version info.

# sudo kubeadm version        

Note:

At this point, only "kubeadm version" will work; the other two commands, "kubectl version" and "kubelet version", will not work yet because there is no cluster to talk to.
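
As an optional aside (not part of the original steps), you can look at the kubelet's systemd unit at this stage; until kubeadm init runs, it keeps restarting because it has no configuration yet:

# sudo systemctl status kubelet
# sudo journalctl -u kubelet --no-pager | tail -n 20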

Step 3:

Bootstrapping the cluster:

Now we will bootstrap the cluster on the Kube master node. Then, we will join each of the two worker nodes to the cluster, forming an actual multi-node Kubernetes cluster.

On the Kube master node (k8-master), initialize the cluster:

# sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Note: If for any reason the above command fails, you have to reset the config using the command below before you initiate the cluster process again.

# sudo kubeadm reset        

When it is done, set up the local kubeconfig:

karthi@k8-master:~$ mkdir -p $HOME/.kube
karthi@k8-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
karthi@k8-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
karthi@k8-master:~$ ls -l $HOME/.kube/config
-rw------- 1 karthi karthi 5449 Dec  2 12:34 /home/karthi/.kube/config
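
With the kubeconfig in place, kubectl can now be run as the regular user without sudo. A couple of quick sanity checks (the exact output will vary):

karthi@k8-master:~$ kubectl cluster-info        # should show the API server at https://192.168.0.100:6443
karthi@k8-master:~$ kubectl get namespaces      # should list default, kube-public and kube-system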

Verify that the cluster is responsive and that Kubectl is working:

# kubectl version        

You should get the Server Version as well as the Client Version. It should look something like this:

karthi@k8-master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-25T02:52:13Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.10", GitCommit:"e3c134023df5dea457638b614ee17ef234dc34a6", GitTreeState:"clean", BuildDate:"2019-07-08T03:40:54Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

The kubeadm init command should output a kubeadm join command containing a token and hash. Copy that command and run it on both worker nodes.
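
If you no longer have the output of kubeadm init handy, you can generate a fresh join command on the master at any time (tokens expire after 24 hours by default):

karthi@k8-master:~$ sudo kubeadm token create --print-join-command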

Step 4:

Connect the client nodes to the master

On Node 2 (k8-node2):

karthi@k8-node2:~$ sudo kubeadm join 192.168.0.100:6443 --token 2wr5dj.ew1h24xlc6ng0swa --discovery-token-ca-cert-hash sha256:c8b61afecb11dcb9b01147169da79714c472aace219faa5fa1ee383a9a21e1f4
[sudo] password for karthi:
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "192.168.0.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.100:6443"
[discovery] Requesting info from "https://192.168.0.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.100:6443"
[discovery] Successfully established connection with API Server "192.168.0.100:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8-node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Note: If for any reason the above command fails, run 'sudo kubeadm reset' and try to join the cluster again.

On Node 1 (k8-node1):

karthi@k8-node1:~$ sudo kubeadm join 192.168.0.100:6443 --token 2wr5dj.ew1h24xlc6ng0swa --discovery-token-ca-cert-hash sha256:c8b61afecb11dcb9b01147169da79714c472aace219faa5fa1ee383a9a21e1f4
[sudo] password for karthi:
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "192.168.0.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.100:6443"
[discovery] Requesting info from "https://192.168.0.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.100:6443"
[discovery] Successfully established connection with API Server "192.168.0.100:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8-node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Note: If for any reason the above command fails, run 'sudo kubeadm reset' and try to join the cluster again.

Now from the master node, we can verify that all nodes have successfully joined the cluster:

karthi@k8-master:~$ sudo kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
k8-master   NotReady   master   7m18s   v1.12.7
k8-node1    NotReady   <none>   2m14s   v1.12.7
k8-node2    NotReady   <none>   2m24s   v1.12.7

Note: The nodes are expected to have a STATUS of NotReady at this point, because cluster networking has not been configured yet.
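
If you want to confirm why a node reports NotReady, describe it and look at its conditions; before a pod network add-on is installed, the kubelet reports that the network plugin is not ready. A hedged example (the exact wording of the message varies between versions):

karthi@k8-master:~$ kubectl describe node k8-node1 | grep -A 4 'Conditions:'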

Step 5:

Configuring Networking:

At this point, we have set up the Kubernetes cluster, but we still need to configure cluster networking in order to make the cluster fully functional.

On all three nodes (k8-master, k8-node1, k8-node2), run the following:

# echo "internet.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf  # sudo sysctl -p        

Install Flannel in the cluster by running this only on the master node:

karthi@k8-master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

It may take a minute for all nodes to reach Ready status; use the command below to verify:

karthi@k8-master:~$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8-master   Ready    master   16m   v1.12.7
k8-node1    Ready    <none>   11m   v1.12.7
k8-node2    Ready    <none>   11m   v1.12.7

Also verify that the Flannel pods are up and running. Run the command below to get a list of system pods:

karthi@k8-master:~$ kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-bb49df795-ctml4             1/1     Running   0          16m
coredns-bb49df795-dt9md             1/1     Running   0          16m
etcd-k8-master                      1/1     Running   0          16m
kube-apiserver-k8-master            1/1     Running   0          15m
kube-controller-manager-k8-master   1/1     Running   0          15m
kube-flannel-ds-amd64-4zmr4         1/1     Running   0          87s
kube-flannel-ds-amd64-khdqn         1/1     Running   0          87s
kube-flannel-ds-amd64-qv2ph         1/1     Running   0          87s
kube-proxy-hmk7d                    1/1     Running   0          16m
kube-proxy-qqdpp                    1/1     Running   0          12m
kube-proxy-rrj75                    1/1     Running   0          12m
kube-scheduler-k8-master            1/1     Running   0          16m

There are three pods with flannel in the name, and all three should have a status of Running.
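
As an optional smoke test (not part of the original write-up), you can deploy a small workload and confirm that pods get scheduled onto the worker nodes and receive pod IPs from the 10.244.0.0/16 Flannel range:

karthi@k8-master:~$ kubectl create deployment nginx --image=nginx
karthi@k8-master:~$ kubectl scale deployment nginx --replicas=2
karthi@k8-master:~$ kubectl get pods -o wide     # pods should be Running on k8-node1/k8-node2 with 10.244.x.x IPs
karthi@k8-master:~$ kubectl delete deployment nginx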

That's all folks, we have configured a fully functional Kubernetes cluster, up and running.

Happy Learning!!!


Source: https://www.linkedin.com/pulse/setting-up-kubernetes-cluster-ubuntu-1804-using-virtual-karthikeyan-k
