Installing a Highly Available k8s Master Cluster

Host          Role        Components
172.18.6.101  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.102  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.103  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.104  K8S Worker  kubelet, cni
172.18.6.105  K8S Worker  kubelet, cni
172.18.6.106  K8S Worker  kubelet, cni
etcd installation

To keep the k8s masters highly available, running the etcd cluster in containers is not recommended: a container can die at any moment, while each etcd node's service is stateful. etcd is therefore deployed here as plain binaries. In production, deploy an etcd cluster of at least 3 nodes. For detailed installation steps, see "setting up an etcd cluster as local services".
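As a point of reference, a minimal etcd systemd unit for the first master might look like the following sketch. The node name, data directory, cluster token, and plain-HTTP URLs are illustrative assumptions; adapt them to your actual etcd deployment (the other two masters get the symmetric configuration):

```ini
# /etc/systemd/system/etcd.service -- illustrative sketch for 172.18.6.101
[Unit]
Description=etcd key-value store
After=network.target

[Service]
Type=notify
ExecStart=/usr/bin/etcd \
  --name=etcd-101 \
  --data-dir=/var/lib/etcd \
  --listen-client-urls=http://172.18.6.101:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=http://172.18.6.101:2379 \
  --listen-peer-urls=http://172.18.6.101:2380 \
  --initial-advertise-peer-urls=http://172.18.6.101:2380 \
  --initial-cluster=etcd-101=http://172.18.6.101:2380,etcd-102=http://172.18.6.102:2380,etcd-103=http://172.18.6.103:2380 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-cluster-state=new
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

The kube-apiserver manifest later in this document uses --etcd-servers=http://127.0.0.1:2379, which matches the local listen-client-urls above.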
Required components and certificate installation

CA certificate

Following the "certificate generation in Kubernetes" guide, create the CA certificate and place ca-key.pem and ca.pem under /etc/kubernetes/ssl on every node in the cluster.

Worker certificate generation

Following the worker-certificate section of the same guide, generate the worker node certificates. Place the certificate for each IP under /etc/kubernetes/ssl on the corresponding worker node.

kubelet.conf configuration

Create /etc/kubernetes/kubelet.conf with the following content:
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://[load balancer IP]:[apiserver port]
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
CNI plugin installation

Download the required CNI binaries from the containernetworking/cni project and place them under /opt/cni/bin on every node in the cluster. An rpm package for one-step installation will be provided later.

kubelet service deployment

Note: an rpm package for one-step installation will be provided later.

Place the kubelet binary of the matching version under /usr/bin on every node in the cluster, then create /etc/systemd/system/kubelet.service with the following content:
# /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.100.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/shenshouer/pause-amd64:3.0"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Create the following directory layout:

/etc/kubernetes/
|-- kubelet.conf
|-- manifests
`-- ssl
    |-- ca-key.pem
    |-- ca.pem
    |-- worker.csr
    |-- worker-key.pem
    |-- worker-openssl.cnf
    `-- worker.pem
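The skeleton above can be created with a short shell snippet. This is a sketch: it only creates the directories (the .pem files come from the certificate steps), and it writes into a scratch directory so it can run without root; on a real node set ROOT=/ instead.

```shell
# Target root; "/" on a real node. A scratch directory is used here so the
# snippet can be run without root privileges.
ROOT="$(mktemp -d)"

# Kubelet configuration skeleton; certificates are copied in separately.
mkdir -p "$ROOT/etc/kubernetes/ssl" "$ROOT/etc/kubernetes/manifests"

# CNI binaries directory used later in this guide.
mkdir -p "$ROOT/opt/cni/bin"

ls -R "$ROOT/etc/kubernetes"
```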
Master component installation

Load balancer configuration

Configure LVS with VIP 172.18.6.254 pointing at the backends 172.18.6.101, 172.18.6.102 and 172.18.6.103. For a simpler setup, nginx can be used instead as a layer-4 TCP load balancer.

Certificate generation

openssl.cnf content is as follows:
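If nginx is chosen for the layer-4 option, a sketch of the stream configuration might look like this. Port 6443 matches the apiserver secure port used later in this document; listening directly on the VIP is an assumption and depends on how the VIP is assigned:

```nginx
# /etc/nginx/nginx.conf -- TCP (layer-4) load-balancing sketch
stream {
    upstream kube_apiserver {
        server 172.18.6.101:6443;
        server 172.18.6.102:6443;
        server 172.18.6.103:6443;
    }
    server {
        listen 172.18.6.254:6443;
        proxy_pass kube_apiserver;
    }
}
```

Whichever load balancer is used, its address and port are what goes into the server field of kubelet.conf.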
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
# load balancer domain that may be used
DNS.5 = test.example.com.cn
IP.1 = 10.96.0.1
# IPs of the three masters
IP.2 = 172.18.6.101
IP.3 = 172.18.6.102
IP.4 = 172.18.6.103
# VIP of the LVS load balancer
IP.5 = 172.18.6.254
For the detailed generation steps, see the master-certificate and worker-certificate sections of the "certificate generation in Kubernetes" guide. Place the generated certificates at the corresponding paths on all three master nodes.

Other component installation

Place the following three files under /etc/kubernetes/manifests on each master node.

kube-apiserver.manifest:
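As an illustration of how an openssl.cnf like the one above is consumed, the following sketch creates a throwaway CA and signs an apiserver certificate with the v3_req extensions. The SAN list is abbreviated and the CA here is disposable; on a real master, sign against the cluster's actual ca.pem/ca-key.pem and use the full alt_names list.

```shell
# Work in a scratch directory; on a real master the results go to /etc/kubernetes/ssl.
cd "$(mktemp -d)"

# Abbreviated SAN config standing in for the openssl.cnf shown above.
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
IP.1 = 10.96.0.1
IP.2 = 172.18.6.254
EOF

# Throwaway CA standing in for the cluster CA (ca.pem / ca-key.pem).
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kube-ca"

# apiserver key, CSR, and signed certificate with the SANs applied.
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf

# The signed certificate should verify against the CA and carry the SANs.
openssl verify -CAfile ca.pem apiserver.pem
openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"
```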
# /etc/kubernetes/manifests/kube-apiserver.manifest
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-apiserver",
"namespace": "kube-system",
"creationTimestamp": null,
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "k8s",
"hostPath": {
"path": "/etc/kubernetes"
}
},
{
"name": "certs",
"hostPath": {
"path": "/etc/ssl/certs"
}
}
],
"containers": [
{
"name": "kube-apiserver",
"image": "registry.aliyuncs.com.cn/shenshouer/kube-apiserver:v1.5.2",
"command": [
"kube-apiserver",
"--insecure-bind-address=127.0.0.1",
"--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
"--service-cluster-ip-range=10.96.0.0/12",
"--service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
"--client-ca-file=/etc/kubernetes/ssl/ca.pem",
"--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem",
"--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
"--secure-port=6443",
"--allow-privileged",
"--advertise-address=[當(dāng)前Master節(jié)點(diǎn)IP]",
"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
"--anonymous-auth=false",
"--etcd-servers=http://127.0.0.1:2379"
],
"resources": {
"requests": {
"cpu": "250m"
}
},
"volumeMounts": [
{
"name": "k8s",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
},
{
"name": "certs",
"mountPath": "/etc/ssl/certs"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 8080,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15,
"failureThreshold": 8
}
}
],
"hostNetwork": true
},
"status": {}
}

kube-controller-manager.manifest:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-controller-manager",
"namespace": "kube-system",
"creationTimestamp": null,
"labels": {
"component": "kube-controller-manager",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "k8s",
"hostPath": {
"path": "/etc/kubernetes"
}
},
{
"name": "certs",
"hostPath": {
"path": "/etc/ssl/certs"
}
}
],
"containers": [
{
"name": "kube-controller-manager",
"image": "registry.aliyuncs.com/shenshouer/kube-controller-manager:v1.5.2",
"command": [
"kube-controller-manager",
"--address=127.0.0.1",
"--leader-elect",
"--master=127.0.0.1:8080",
"--cluster-name=kubernetes",
"--root-ca-file=/etc/kubernetes/ssl/ca.pem",
"--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
"--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem",
"--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem",
"--insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap",
"--allocate-node-cidrs=true",
"--cluster-cidr=10.244.0.0/16"
],
"resources": {
"requests": {
"cpu": "200m"
}
},
"volumeMounts": [
{
"name": "k8s",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
},
{
"name": "certs",
"mountPath": "/etc/ssl/certs"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 10252,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15,
"failureThreshold": 8
}
}
],
"hostNetwork": true
},
"status": {}
}
kube-scheduler.manifest:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-scheduler",
"namespace": "kube-system",
"creationTimestamp": null,
"labels": {
"component": "kube-scheduler",
"tier": "control-plane"
}
},
"spec": {
"containers": [
{
"name": "kube-scheduler",
"image": "registry.aliyuncs.com/shenshouer/kube-scheduler:v1.5.2",
"command": [
"kube-scheduler",
"--address=127.0.0.1",
"--leader-elect",
"--master=127.0.0.1:8080"
],
"resources": {
"requests": {
"cpu": "100m"
}
},
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 10251,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15,
"failureThreshold": 8
}
}
],
"hostNetwork": true
},
"status": {}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
Add-on installation

kube-proxy installation

On any master, run kubectl create -f kube-proxy-ds.yaml, where kube-proxy-ds.yaml contains:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    component: kube-proxy
    k8s-app: kube-proxy
    kubernetes.io/cluster-service: "true"
    name: kube-proxy
    tier: node
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      component: kube-proxy
      k8s-app: kube-proxy
      kubernetes.io/cluster-service: "true"
      name: kube-proxy
      tier: node
  template:
    metadata:
      labels:
        component: kube-proxy
        k8s-app: kube-proxy
        kubernetes.io/cluster-service: "true"
        name: kube-proxy
        tier: node
    spec:
      containers:
      - command:
        - kube-proxy
        - --kubeconfig=/run/kubeconfig
        - --cluster-cidr=10.244.0.0/16
        image: registry.aliyuncs.com/shenshouer/kube-proxy:v1.5.2
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/run/dbus
          name: dbus
        - mountPath: /run/kubeconfig
          name: kubeconfig
        - mountPath: /etc/kubernetes/ssl
          name: ssl
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /etc/kubernetes/kubelet.conf
        name: kubeconfig
      - hostPath:
          path: /var/run/dbus
        name: dbus
      - hostPath:
          path: /etc/kubernetes/ssl
        name: ssl
Network component installation

On any master, run kubectl apply -f kube-flannel.yaml, where kube-flannel.yaml contains the following. Note: if running inside Vagrant-managed VMs, change the flanneld arguments so that --iface points at the actual inter-node NIC.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: kube-system
  name: kube-flannel-cfg
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "ipMasq": true,
        "bridge": "cbr0",
        "hairpinMode": true,
        "forceAddress": true,
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: kube-flannel-ds
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth0" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
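For the Vagrant case mentioned above, only the flanneld command line needs to change; a hypothetical variant assuming the inter-node traffic runs over eth1:

```yaml
# Replace the kube-flannel container command in the DaemonSet with e.g.:
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth1" ]
```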
DNS deployment

On any master, run kubectl create -f skydns.yaml, where skydns.yaml contains:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        image: registry.aliyuncs.com/shenshouer/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=0
        - --federations=myfederation=federation.test
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: registry.aliyuncs.com/shenshouer/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: registry.aliyuncs.com/shenshouer/dnsmasq-metrics-amd64:1.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: registry.aliyuncs.com/shenshouer/exechealthz-amd64:1.2
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
Node installation

Install Docker.

Create the /etc/kubernetes/ directory with the following layout:

/etc/kubernetes/
|-- kubelet.conf
|-- manifests
`-- ssl
    |-- ca-key.pem
    |-- ca.pem
    |-- ca.srl
    |-- worker.csr
    |-- worker-key.pem
    |-- worker-openssl.cnf
    `-- worker.pem

Create /etc/kubernetes/kubelet.conf as described in the kubelet.conf configuration section above.
Create /etc/kubernetes/ssl; for certificate generation, see the worker certificate section above.
Create /etc/kubernetes/manifests.
Create /opt/cni/bin and install CNI as described in the CNI installation steps above.
Install kubelet as described in the kubelet service deployment section, then run:

systemctl enable kubelet && systemctl restart kubelet && journalctl -fu kubelet