This article follows the Kubernetes documentation "Installing Kubernetes on Linux with kubeadm" to deploy a Kubernetes cluster on CentOS 7.2 with kubeadm, and covers some of the problems encountered while following that document.
Operating system version
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
Kernel version
# uname -r
3.10.0-327.el7.x86_64
Cluster nodes
192.168.120.122 kube-master
192.168.120.123 kube-agent1
192.168.120.124 kube-agent2
192.168.120.125 kube-agent3
That is, the cluster consists of one master (control-plane) node and three worker nodes.
Preparation before deployment
Configure access to Google sites
The packages used by this deployment method come from Google-hosted repositories, so every cluster node must be able to reach those external sites; how to arrange that access is left to the reader.
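If the nodes can only reach the outside world through an HTTP proxy, one option is to point yum and the Docker daemon at it. This is only a sketch under that assumption: the proxy address 192.168.120.1:8118 is a made-up placeholder, and the Docker drop-in only takes effect once Docker has been installed and started (see the install section below).

## 192.168.120.1:8118 is a placeholder proxy address -- replace with your own
# echo "proxy=http://192.168.120.1:8118" >> /etc/yum.conf
# mkdir -p /etc/systemd/system/docker.service.d
# cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.120.1:8118" "HTTPS_PROXY=http://192.168.120.1:8118" "NO_PROXY=localhost,127.0.0.1"
EOF
## run these two only after Docker has been installed and started
# systemctl daemon-reload && systemctl restart docker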
Disable the firewall
# systemctl stop firewalld.service && systemctl disable firewalld.service
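Disabling the firewall outright is the simplest route for a test cluster. If that is not acceptable, an alternative (not part of the original procedure; the port list is an assumption based on the components used here) is to leave firewalld running and open the ports the cluster needs instead:

# firewall-cmd --permanent --add-port=6443/tcp    ## API server
# firewall-cmd --permanent --add-port=10250/tcp   ## kubelet
# firewall-cmd --permanent --add-port=8472/udp    ## flannel vxlan traffic
# firewall-cmd --reload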
Disable SELinux
# setenforce 0
# sed -i.bak 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
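setenforce changes the running system, while the sed only edits the configuration read at boot. A quick check that the running system is now in permissive mode:

# getenforce
Permissive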
Configure the yum repository
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
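Before installing anything, it is worth confirming that the repository is reachable and that the Kubernetes packages are visible. This sanity check is not part of the original procedure and its output is omitted here:

# yum repolist
# yum --showduplicates list kubeadm kubelet kubectl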
Install kubelet and kubeadm
Install the following packages on all nodes:
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
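At this stage it is normal for kubelet to keep restarting: the configuration it needs (/etc/kubernetes/kubelet.conf) is only written later by kubeadm init or kubeadm join. To watch what it is doing in the meantime:

# systemctl status kubelet
# journalctl -u kubelet -f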
Then set the kernel parameters:
# sysctl net.bridge.bridge-nf-call-iptables=1
# sysctl net.bridge.bridge-nf-call-ip6tables=1
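These sysctl commands only affect the running kernel. To keep the settings across reboots, one option (the file name /etc/sysctl.d/k8s.conf is an arbitrary choice) is to write them to a sysctl.d file and reload:

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system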
Initialize the master node
# kubeadm init --pod-network-cidr=10.244.0.0/16
Because flannel will be used to build the pod network of this cluster, the --pod-network-cidr flag must be added; its value matches the network that the flannel manifest configures (see the excerpt in the pod network section below).
Note: initialization is slow, because this step pulls several Docker images.
The output of this command is as follows:
Initializing your master...
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.120.122]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1377.560339 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 6.039626 seconds
[token] Using token: 60bc68.e94800f3c5c4c2d5
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token <token> 192.168.120.122:6443
Check the Docker images on the master node:
# docker images
REPOSITORY                                                TAG       IMAGE ID        CREATED         SIZE
gcr.io/google_containers/kube-apiserver-amd64             v1.6.4    4e3810a19a64    2 days ago      150.6 MB
gcr.io/google_containers/kube-controller-manager-amd64    v1.6.4    0ea16a85ac34    2 days ago      132.8 MB
gcr.io/google_containers/kube-proxy-amd64                 v1.6.4    e073a55c288b    2 days ago      109.2 MB
gcr.io/google_containers/kube-scheduler-amd64             v1.6.4    1fab9be555e1    2 days ago      76.75 MB
gcr.io/google_containers/etcd-amd64                       3.0.17    243830dae7dd    12 weeks ago    168.9 MB
gcr.io/google_containers/pause-amd64                      3.0       99e59f495ffa    12 months ago   746.9 kB
Follow the hints printed by the init command:
# cp /etc/kubernetes/admin.conf $HOME/
# chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
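With KUBECONFIG pointing at admin.conf, kubectl should now be able to reach the API server. A quick sanity check (output omitted):

# kubectl cluster-info
# kubectl get componentstatuses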
Master isolation
By default, kubeadm taints the master so that ordinary pods are not scheduled on it. Removing the taint allows workloads, and the add-on pods shown later, to run on the master as well:
# kubectl taint nodes --all node-role.kubernetes.io/master-
node "kube-master" tainted
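If you later want to restore the default behaviour and keep ordinary workloads off the master, the taint can be added back; the node name kube-master is the one used in this cluster:

# kubectl taint nodes kube-master node-role.kubernetes.io/master=:NoSchedule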
Install the pod network
# kubectl apply -f flannel/Documentation/kube-flannel-rbac.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
# kubectl apply -f flannel/Documentation/kube-flannel.yml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
The flannel manifests used above can be obtained by cloning the flannel repository:
# git clone https://github.com/coreos/flannel.git
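The reason 10.244.0.0/16 was passed to kubeadm init earlier is that kube-flannel.yml configures flannel with exactly that network. The excerpt below shows the relevant part of the kube-flannel-cfg ConfigMap at the time of writing; the exact contents and formatting may differ in later revisions of the manifest:

# grep -A 6 'net-conf.json' flannel/Documentation/kube-flannel.yml
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }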
Add the worker nodes
On each worker node, run the join command printed by kubeadm init, replacing <token> with the real token from the init output:
# kubeadm join --token <token> 192.168.120.122:6443
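If a join attempt fails partway (for example a preflight check complains about files left over from a previous attempt), the node can be wiped back to a clean state and the join retried; treat the exact behaviour of the command as version dependent:

# kubeadm reset
# systemctl restart kubelet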
The output of this command is as follows:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.120.122:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.120.122:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.120.122:6443"
[discovery] Successfully established connection with API Server "192.168.120.122:6443"
[bootstrap] Detected server version: v1.6.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
Check the cluster state on the master node
# kubectl get nodes
NAME          STATUS    AGE       VERSION
kube-agent1   Ready     16m       v1.6.3
kube-agent2   Ready     16m       v1.6.3
kube-agent3   Ready     16m       v1.6.3
kube-master   Ready     37m       v1.6.3
# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE       IP                NODE
kube-system   etcd-kube-master                       1/1       Running   0          32m       192.168.120.122   kube-master
kube-system   kube-apiserver-kube-master             1/1       Running   7          32m       192.168.120.122   kube-master
kube-system   kube-controller-manager-kube-master    1/1       Running   0          32m       192.168.120.122   kube-master
kube-system   kube-dns-3913472980-3x9wh              3/3       Running   0          37m       10.244.0.2        kube-master
kube-system   kube-flannel-ds-1m4wz                  2/2       Running   0          18m       192.168.120.122   kube-master
kube-system   kube-flannel-ds-3jwf5                  2/2       Running   0          17m       192.168.120.123   kube-agent1
kube-system   kube-flannel-ds-41qbs                  2/2       Running   4          17m       192.168.120.125   kube-agent3
kube-system   kube-flannel-ds-ssjct                  2/2       Running   4          17m       192.168.120.124   kube-agent2
kube-system   kube-proxy-0mmfc                       1/1       Running   0          17m       192.168.120.124   kube-agent2
kube-system   kube-proxy-23vwr                       1/1       Running   0          17m       192.168.120.125   kube-agent3
kube-system   kube-proxy-5q8vq                       1/1       Running   0          17m       192.168.120.123   kube-agent1
kube-system   kube-proxy-8srwn                       1/1       Running   0          37m       192.168.120.122   kube-master
kube-system   kube-scheduler-kube-master             1/1       Running   0          32m       192.168.120.122   kube-master
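As an optional smoke test that is not part of the original procedure, a small workload can be started to confirm that pods are scheduled and receive flannel pod-network addresses; the image name nginx and the replica count are arbitrary choices:

# kubectl run nginx --image=nginx --replicas=2 --port=80
# kubectl get pods -o wide
# kubectl delete deployment nginx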
At this point, the Kubernetes cluster deployment is complete.
Original article: http://blog.csdn.net/u012066426/article/details/72627305