Installing Kubernetes on CentOS 7 with kubeadm
0. Preface
The Kubernetes version used in this article is 1.10.0.
The operating system is CentOS 7.
There are three machines in total:
Name | IP | Role |
---|---|---|
ceph1 | 172.16.32.70 | Master |
ceph2 | 172.16.32.107 | Node |
ceph3 | 172.16.32.71 | Node |
1. Preparation
The following steps must be performed on both the Master and the Nodes:
- Environment setup
- Add hostnames
[root@ceph1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.32.70  ceph1
172.16.32.107 ceph2
172.16.32.71  ceph3
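The same hosts entries are needed on all three machines. A small convenience sketch, assuming root SSH access between the hosts (not part of the original article), is to copy the file over:
for h in ceph2 ceph3; do
    scp /etc/hosts root@"$h":/etc/hosts   # push the Master's hosts file to each node
done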
- Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
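If disabling firewalld entirely is not acceptable in your environment, a sketch of the alternative is to open only the ports this setup needs; the port list below is an assumption based on the components used here (apiserver, kubelet, flannel VXLAN), not part of the original article:
firewall-cmd --permanent --add-port=6443/tcp    # kube-apiserver
firewall-cmd --permanent --add-port=10250/tcp   # kubelet
firewall-cmd --permanent --add-port=8472/udp    # flannel VXLAN
firewall-cmd --reload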
- Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
Note that the SELINUX= entry in /etc/sysconfig/selinux does not always default to enforcing; it may be permissive instead, in which case the sed above will not match anything. The safest approach is to open the file and change the value to disabled by hand.
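A quick check that the change took effect (setenforce 0 only switches to Permissive mode; the disabled setting applies at the next reboot):
getenforce   # prints Permissive now, Disabled after a reboot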
- Turn off swap
swapoff -a
The command above does not survive a reboot. To disable swap permanently, open /etc/fstab and comment out the swap entry:
#/dev/mapper/centos-swap swap swap defaults 0 0
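If you prefer to script the edit, here is a one-liner sketch that comments out any uncommented swap entry in /etc/fstab (an addition to the original; verify the result by hand afterwards):
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab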
- Set the kernel routing parameters: create /etc/sysctl.d/k8s.conf and add the following content
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
After adding it, run the following commands so the changes take effect:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
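modprobe only loads the module for the current boot. A minimal sketch for loading br_netfilter automatically on CentOS 7 via systemd's modules-load mechanism (an addition, not part of the original article):
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF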
- Install Docker
- Installation commands
yum install -y docker
systemctl start docker
systemctl enable docker
- Configure a Docker registry mirror and the private registry address
[root@ceph1 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://p7t6rz3f.mirror.aliyuncs.com"],
  "insecure-registries": ["172.16.32.48:5000"]
}
- Reload and restart the docker service
systemctl daemon-reload
systemctl restart docker
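To confirm the settings took effect, docker info should list both the mirror and the insecure registry; a quick check (not in the original article):
docker info | grep -A 1 -E 'Registry Mirrors|Insecure Registries'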
- Install kubeadm, kubelet, and kubectl
- Add the Aliyun repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Install
yum makecache fast && yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0
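The kubeadm init output in section 2 warns that the kubelet service is not enabled, so it is convenient to enable it right after installation:
systemctl enable kubelet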
- Configure kubelet
1. First check Docker's cgroup driver:
docker info | grep Cgroup
The KUBELET_CGROUP_ARGS=--cgroup-driver=xxx setting in the kubelet configuration file must match Docker's; if it does not, change it.
2. Add the line:
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
The reason: this lifts kubelet's requirement that swap be disabled on the system.
3. Reload the service:
systemctl daemon-reload
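For reference, a minimal sketch that performs step 1 automatically. It assumes the kubelet drop-in file lives at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the path used by the kubeadm RPMs of this era; verify it exists on your system before running this):
#!/bin/bash
# Read Docker's cgroup driver (prints e.g. "systemd" or "cgroupfs")
driver=$(docker info 2>/dev/null | awk '/Cgroup Driver/{print $3}')
conf=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Rewrite the --cgroup-driver argument to match Docker's driver
sed -ri "s/(--cgroup-driver=)[a-z]+/\1${driver}/" "$conf"
grep -- --cgroup-driver "$conf"   # confirm the change
systemctl daemon-reload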
2. Cluster installation
- Since the images on k8s.gcr.io cannot be pulled directly, pull them from a mirror first and then re-tag them
Put the pull-and-retag commands into a shell script:
master:
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/etcd-amd64:3.1.12
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/flannel:v0.10.0-amd64
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-apiserver-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-scheduler-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-controller-manager-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-proxy-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-kube-dns-amd64:1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-sidecar-amd64:1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/pause-amd64:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
node:
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-proxy-amd64:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/flannel:v0.10.0-amd64
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/pause-amd64:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kubernetes-dashboard-amd64:v1.8.3
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-influxdb-amd64:v1.3.3
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-grafana-amd64:v4.4.3
docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-amd64:v1.5.3
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-amd64:v1.5.3 k8s.gcr.io/heapster-amd64:v1.5.3
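The same work can also be expressed as a loop. A compact sketch equivalent to the node script above (flannel is special-cased because it is re-tagged back to its quay.io name rather than k8s.gcr.io):
#!/bin/bash
mirror=registry.cn-hangzhou.aliyuncs.com/zhanye-zhang
images="kube-proxy-amd64:v1.10.0 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3
        heapster-influxdb-amd64:v1.3.3 heapster-grafana-amd64:v4.4.3 heapster-amd64:v1.5.3"
for img in $images; do
    docker pull "$mirror/$img"
    docker tag "$mirror/$img" "k8s.gcr.io/$img"
done
# flannel goes back under its original quay.io name
docker pull "$mirror/flannel:v0.10.0-amd64"
docker tag "$mirror/flannel:v0.10.0-amd64" quay.io/coreos/flannel:v0.10.0-amd64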
- Run the initialization on the Master
kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.16.32.70
The parameters:
--kubernetes-version   the cluster version
--pod-network-cidr     the Pod network segment (flannel is used as the network plugin later)
--apiserver-advertise-address   the address the apiserver advertises (the Master's IP)
After a few minutes the installation completes, with output like the following:
[root@localhost init]# kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.16.32.70
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ceph1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.32.70]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ceph1] and IPs [172.16.32.70]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 23.003060 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ceph1 as master by adding a label and a taint
[markmaster] Master ceph1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: nz9430.5xujb6kf7pcu5qkw
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.16.32.70:6443 --token nz9430.5xujb6kf7pcu5qkw --discovery-token-ca-cert-hash sha256:7e6af34aa62a2fa79c8f6e56781193b7ebbabd0e883b099a378f850686bd9730
- Follow the prompt "To start using your cluster, you need to run the following as a regular user":
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Verify the cluster
[root@localhost init]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
ceph1     NotReady   master    2h        v1.10.0
Until a Pod network add-on is installed, kubectl get nodes will report the nodes as NotReady.
3. Install the network add-on
- Download the official yaml file
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Since the official images were already pulled and re-tagged earlier, there is no need to change the image addresses in the file.
- Apply it
kubectl create -f kube-flannel.yml
After a while, check with:
kubectl get pods -n kube-system
[root@localhost init]# kubectl get pods -n kube-system
NAME                            READY     STATUS    RESTARTS   AGE
etcd-ceph1                      1/1       Running   0          3h
kube-apiserver-ceph1            1/1       Running   0          2h
kube-controller-manager-ceph1   1/1       Running   0          2h
kube-dns-86f4d74b45-2cdgj       3/3       Running   0          3h
kube-flannel-ds-amd64-n7gdx     1/1       Running   0          2h
kube-proxy-2r6wl                1/1       Running   0          3
kube-scheduler-ceph1            1/1       Running   0          3h
Once all Pods are in the Running state, run
kubectl get nodes
again and the Master now shows Ready.
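If any Pod stays stuck (Pending, CrashLoopBackOff, and so on), two generic places to look, using standard kubectl and journalctl commands (an addition to the original article; substitute the actual pod name for the placeholder):
kubectl describe pod <pod-name> -n kube-system   # the events at the bottom usually name the problem
journalctl -u kubelet -f                         # kubelet logs on the affected node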
4. Join the nodes to the cluster
All of the steps above, apart from the preparation and the node image-pull script, are performed on the Master. With the Master deployed, join each Node to the cluster by running the command printed after a successful master init:
kubeadm join 172.16.32.70:6443 --token nz9430.5xujb6kf7pcu5qkw --discovery-token-ca-cert-hash sha256:7e6af34aa62a2fa79c8f6e56781193b7ebbabd0e883b099a378f850686bd9730
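Bootstrap tokens expire after 24 hours by default; if the token above no longer works, a new one and the CA cert hash can be regenerated on the Master (standard kubeadm/openssl usage, an addition to the original article):
kubeadm token create
# recompute the value for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'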
After a while, check the nodes on the Master:
[root@localhost init]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
ceph1     Ready     master    2h        v1.10.0
ceph2     Ready     <none>    2h        v1.10.0
ceph3     Ready     <none>    2h        v1.10.0
5. Verify that DNS works
- Create a Pod for the check, saved as busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
- Apply it
kubectl create -f busybox.yaml
- Verify
Once the busybox Pod's status is Running, run:
kubectl exec -ti busybox -- nslookup kubernetes.default
If the output looks like:
[root@localhost test]# kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
then DNS is working correctly.
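If the lookup fails instead, a first thing to inspect is the resolver configuration injected into the Pod (generic kubectl usage, an addition to the original article):
kubectl exec -ti busybox -- cat /etc/resolv.conf
# the nameserver line should point at the kube-dns service IP (10.96.0.10 here)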