Installing Kubernetes with kubeadm on CentOS 7


0. Preface

This article uses Kubernetes v1.10.0.

The operating system is CentOS 7.

There are three machines in total:

Name   IP             Role
ceph1  172.16.32.70   Master
ceph2  172.16.32.107  Node
ceph3  172.16.32.71   Node

1. Preparation

Steps required on both the Master and the Nodes:

  • Environment setup

    • Add hostnames

      [root@ceph1 ~]# cat /etc/hosts
      127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      
      172.16.32.70 ceph1
      172.16.32.107 ceph2
      172.16.32.71 ceph3
      
    • Disable the firewall

      systemctl stop firewalld

      systemctl disable firewalld

    • Disable SELinux

      sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

      setenforce 0

      Note that the SELINUX entry in /etc/sysconfig/selinux is not necessarily enforcing by default; it may be permissive, in which case the sed above will not match. It is safest to open the file and set the value to disabled by hand.
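Because the existing value may be permissive rather than enforcing, a more robust edit replaces whatever value is present. A minimal sketch, run here against a temporary copy so it is safe to execute anywhere; on a real host the target file is /etc/sysconfig/selinux:

```shell
# Stand-in for /etc/sysconfig/selinux, so this sketch can run anywhere
cfg=$(mktemp)
printf 'SELINUX=permissive\nSELINUXTYPE=targeted\n' > "$cfg"

# Match any current value (enforcing, permissive, ...) and force "disabled";
# SELINUXTYPE is untouched because the pattern requires '=' right after SELINUX
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"

grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
```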

    • Turn off swap

      swapoff -a

      This command does not survive a reboot. To disable swap permanently, open /etc/fstab and comment out the swap entry:

      /dev/mapper/centos-swap swap swap defaults 0 0
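Commenting out the swap entry can also be scripted. A sketch, demonstrated on a temporary copy so it can run anywhere; on a real host the target is /etc/fstab:

```shell
# Stand-in for /etc/fstab
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Prefix every line that mounts a swap filesystem with '#'
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' "$fstab"

grep swap "$fstab"   # the swap entry is now commented out
```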

    • Set the kernel routing parameters: create /etc/sysctl.d/k8s.conf with the following content

      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
      

      Then run the following commands to apply the changes:

      modprobe br_netfilter

      sysctl -p /etc/sysctl.d/k8s.conf

  • Install Docker

    • Installation commands

      yum install -y docker
      systemctl start docker
      systemctl enable docker
      
    • Add a Docker registry mirror and a private (insecure) registry address

      [root@ceph1 ~]# cat /etc/docker/daemon.json 
      {
        "registry-mirrors": ["https://p7t6rz3f.mirror.aliyuncs.com"],
        "insecure-registries": ["172.16.32.48:5000"]
      }
      
    • Reload and restart the Docker service

      systemctl daemon-reload
      systemctl restart docker 
      
  • Install kubeadm, kubelet, and kubectl

    • Add the Aliyun yum repository

      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
              http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      
    • Install the packages

      yum makecache fast && yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0

    • Configure the kubelet

      1. Check Docker's cgroup driver with docker info | grep Cgroup. The KUBELET_CGROUP_ARGS=--cgroup-driver=xxx setting in the kubelet configuration must match Docker's driver; if they differ, change the kubelet side.

      2. Add the line Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

      This lifts kubelet's restriction that the system must have swap turned off.

      3. Reload the service configuration: systemctl daemon-reload
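The cgroup-driver check and fix in step 1 can be scripted. A sketch under two assumptions: that docker info reports a line of the form "Cgroup Driver: cgroupfs", and that the kubelet drop-in lives at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the path used by these RPM packages). A temporary copy is edited here so the sketch is safe to run anywhere:

```shell
# On a real host: driver=$(docker info 2>/dev/null | awk '/Cgroup Driver/ {print $3}')
driver=cgroupfs   # assume Docker reported cgroupfs

# Stand-in for /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
conf=$(mktemp)
echo 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"' > "$conf"

# Rewrite the kubelet flag to match Docker's driver
sed -i "s/--cgroup-driver=[a-z]*/--cgroup-driver=$driver/" "$conf"

grep cgroup-driver "$conf"
```

Remember to run systemctl daemon-reload after editing the real drop-in file.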


2. Cluster installation

  • Since images from k8s.gcr.io cannot be pulled directly, pull substitutes from a reachable mirror first and retag them

    Put the pull-and-retag operations into a shell script:

    master:

    #!/bin/bash
    
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/etcd-amd64:3.1.12
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/flannel:v0.10.0-amd64
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-apiserver-amd64:v1.10.0
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-scheduler-amd64:v1.10.0
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-controller-manager-amd64:v1.10.0
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-proxy-amd64:v1.10.0
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-kube-dns-amd64:1.14.8
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-dnsmasq-nanny-amd64:1.14.8
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-sidecar-amd64:1.14.8
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/pause-amd64:3.1
    
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
    

    node:

    #!/bin/bash
    
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-proxy-amd64:v1.10.0
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/flannel:v0.10.0-amd64
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/pause-amd64:3.1
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kubernetes-dashboard-amd64:v1.8.3
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-influxdb-amd64:v1.3.3
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-grafana-amd64:v4.4.3
    docker pull registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-amd64:v1.5.3
    
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
    docker tag registry.cn-hangzhou.aliyuncs.com/zhanye-zhang/heapster-amd64:v1.5.3 k8s.gcr.io/heapster-amd64:v1.5.3
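Every line of the two scripts above follows the same pull-then-retag pattern, so they can be collapsed into a loop. A sketch with a reduced image list: the docker commands are printed rather than executed (a dry run), so the shape can be checked without a Docker daemon; pipe the output to sh, or drop the echo, to run it for real:

```shell
#!/bin/bash
MIRROR=registry.cn-hangzhou.aliyuncs.com/zhanye-zhang

# Print the pull/tag command pair for each image; flannel is retagged
# for quay.io, everything else for k8s.gcr.io.
mirror_cmds() {
  for img in "$@"; do
    case "$img" in
      flannel:*) target=quay.io/coreos/flannel:${img#flannel:} ;;
      *)         target=k8s.gcr.io/$img ;;
    esac
    echo "docker pull $MIRROR/$img"
    echo "docker tag $MIRROR/$img $target"
  done
}

# Node image set (reduced here for brevity)
mirror_cmds kube-proxy-amd64:v1.10.0 flannel:v0.10.0-amd64 pause-amd64:3.1
```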
    
  • Run the initialization on the Master

    kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.16.32.70

    The parameters:

    --kubernetes-version  the cluster version

    --pod-network-cidr  the Pod network CIDR (flannel will be used as the network plugin later)

    --apiserver-advertise-address  the address the apiserver advertises (the Master's IP address)

    Wait a few minutes and the installation completes, with output like the following:

    [root@localhost init]# kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.16.32.70
    [init] Using Kubernetes version: v1.10.0
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks.
            [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
            [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
    [preflight] Starting the kubelet service
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [ceph1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.32.70]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [ceph1] and IPs [172.16.32.70]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
    [init] This might take a minute or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 23.003060 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node ceph1 as master by adding a label and a taint
    [markmaster] Master ceph1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: nz9430.5xujb6kf7pcu5qkw
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 172.16.32.70:6443 --token nz9430.5xujb6kf7pcu5qkw --discovery-token-ca-cert-hash sha256:7e6af34aa62a2fa79c8f6e56781193b7ebbabd0e883b099a378f850686bd9730
    
  • Follow the prompt "To start using your cluster, you need to run the following as a regular user":

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
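For the root user there is a standard alternative: point kubectl at the admin kubeconfig directly instead of copying it. Noted here as a convenience; the copy-to-$HOME approach above is what the kubeadm output recommends for regular users:

```shell
# Root-only alternative: use admin.conf in place without copying it
export KUBECONFIG=/etc/kubernetes/admin.conf
```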
    
  • Verify the cluster

    [root@localhost init]# kubectl get nodes
    NAME      STATUS    ROLES     AGE       VERSION
    ceph1     NotReady     master    2h        v1.10.0
    

    Until a Pod network plugin has been installed, kubectl get nodes will show the nodes as NotReady.

3. Installing the network plugin

  • Download the official yaml file

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    Since the official images were already pulled and retagged earlier, there is no need to change the image addresses in the file.

  • Apply it

    kubectl create -f kube-flannel.yml

    After a while, check with kubectl get pods -n kube-system:

    [root@localhost init]# kubectl get pods -n kube-system
    NAME                                    READY     STATUS    RESTARTS   AGE
    etcd-ceph1                              1/1       Running   0          3h
    kube-apiserver-ceph1                    1/1       Running   0          2h
    kube-controller-manager-ceph1           1/1       Running   0          2h
    kube-dns-86f4d74b45-2cdgj               3/3       Running   0          3h
    kube-flannel-ds-amd64-n7gdx             1/1       Running   0          2h
    kube-proxy-2r6wl                        1/1       Running   0          3
    kube-scheduler-ceph1                    1/1       Running   0          3h
    

    Once all Pods are in the Running state, running kubectl get nodes again shows the Master as Ready.

4. Joining nodes to the cluster

Apart from the preparation steps and the node image-pull script, everything above was done on the Master. With the Master deployed, join each Node to the cluster by running the command printed at the end of a successful kubeadm init:

kubeadm join 172.16.32.70:6443 --token nz9430.5xujb6kf7pcu5qkw --discovery-token-ca-cert-hash sha256:7e6af34aa62a2fa79c8f6e56781193b7ebbabd0e883b099a378f850686bd9730

After a while, check the nodes on the Master:

[root@localhost init]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
ceph1     Ready     master    2h        v1.10.0
ceph2     Ready     <none>    2h        v1.10.0
ceph3     Ready     <none>    2h        v1.10.0

5. Verifying that DNS works

  • Create a Pod for the test

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      namespace: default
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
          - sleep
          - "3600"
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
    
  • Run

    kubectl create -f busybox.yaml

  • Verify

    Once the busybox Pod's status is Running, run:

    kubectl exec -ti busybox -- nslookup kubernetes.default

    If the output is:

    [root@localhost test]# kubectl exec -ti busybox -- nslookup kubernetes.default
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes.default
    Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
    

    then DNS is working correctly.
