I. Deploying shared storage with NFS
1. Stop the firewall
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
2. Install and configure nfs
$ yum -y install nfs-utils rpcbind
3. Set permissions on the shared directory:
$ chmod 755 /data/k8s/
4. Configure nfs. The default nfs configuration lives in /etc/exports; add the following entry to that file:
$ vi /etc/exports
/data/k8s *(rw,sync,no_root_squash)
5. What the options mean:
/data/k8s: the directory being shared
*: any host may connect; this can also be a subnet, a single IP, or a domain name
rw: read and write access
sync: replies to requests only after data has been committed to disk, not just cached in memory
no_root_squash: when the user accessing the share is root, root privileges are kept on the server; without this option, root would be squashed to the anonymous user, whose UID and GID are usually nobody
Starting the services: nfs must register with rpc, and once rpc restarts all registrations are lost, so every service registered with it must be restarted as well.
Mind the start order: start rpcbind first.
$ systemctl start rpcbind.service
$ systemctl enable rpcbind
$ systemctl status rpcbind
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; disabled; vendor preset: enabled)
Active: active (running) since Tue 2018-07-10 20:57:29 CST; 1min 54s ago
Process: 17696 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
Main PID: 17697 (rpcbind)
Tasks: 1
Memory: 1.1M
CGroup: /system.slice/rpcbind.service
└─17697 /sbin/rpcbind -w
Jul 10 20:57:29 master systemd[1]: Starting RPC bind service...
Jul 10 20:57:29 master systemd[1]: Started RPC bind service.
Seeing Started above confirms it started successfully.
Then start the nfs service:
$ systemctl start nfs.service
$ systemctl enable nfs
$ systemctl status nfs
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Tue 2018-07-10 21:35:37 CST; 14s ago
Main PID: 32067 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Jul 10 21:35:37 master systemd[1]: Starting NFS server and services...
Jul 10 21:35:37 master systemd[1]: Started NFS server and services.
Again, seeing Started proves the NFS server started successfully.
You can also confirm it with the command below:
$ rpcinfo -p|grep nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 3 udp 2049 nfs_acl
Check the export options of the shared directory:
$ cat /var/lib/nfs/etab
/data/k8s *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,secure,no_root_squash,no_all_squash)
The NFS server is now installed. Next, install the NFS client on node 10.8.13.84 to verify nfs.
Installing nfs on the client also requires stopping the firewall first:
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
Then install nfs:
$ yum -y install nfs-utils rpcbind
After installation, start rpc first and then nfs, just as above:
$ systemctl start rpcbind.service
$ systemctl enable rpcbind.service
$ systemctl start nfs.service
$ systemctl enable nfs.service
Mounting the data directory: with the client up, mount the NFS share from the client to test it.
First check whether the NFS server exports the shared directory:
$ showmount -e 10.8.13.211
Export list for 10.8.13.211:
/data/k8s *
Then create a directory on the client:
$ mkdir /data
Mount the NFS shared directory onto it:
$ mount -t nfs 10.8.13.211:/data/k8s /data
After mounting succeeds, create a file in that directory on the client, then check whether it also appears in the shared directory on the NFS server:
$ touch /data/test.txt
Then check on the NFS server:
$ ls -ls /data/k8s/
total 4
4 -rw-r--r--. 1 root root 4 Jul 10 21:50 test.txt
If the test.txt file shows up as above, the NFS mount works.
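To make the client mount persist across reboots, it can also be recorded in /etc/fstab. A minimal sketch, using the server IP and paths from the steps above; the mount options are common defaults, not part of the original setup:

```
# /etc/fstab entry — _netdev delays mounting until the network is up
10.8.13.211:/data/k8s  /data  nfs  defaults,_netdev  0  0
```

After adding the line, `mount -a` applies it without a reboot.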
II. Deploying dynamic provisioning with StorageClass
Creation
To use a StorageClass you must install the matching automatic provisioner. Since the storage backend here is NFS, that means the nfs-client provisioner (also called a Provisioner): it uses the NFS server we have already configured to create persistent volumes, i.e. PVs, automatically.
- Automatically created PVs are placed in the NFS server's shared data directory under names of the form ${namespace}-${pvcName}-${pvName}
- When such a PV is reclaimed, it remains on the NFS server under the name archived-${namespace}-${pvcName}-${pvName}
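The naming scheme can be sketched with plain shell parameter expansion; the namespace, PVC, and PV names below are made-up examples, not values from this cluster:

```shell
# Hypothetical values; the provisioner derives these from the actual PVC/PV.
namespace="default"
pvcName="data-redis-0"
pvName="pvc-0c1ffab2"

dir="${namespace}-${pvcName}-${pvName}"   # directory while the PV is bound
archived="archived-${dir}"                # directory after the PV is reclaimed

echo "$dir"
echo "$archived"
```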
Of course, before deploying nfs-client the NFS server must already be installed and working; here the service address is 10.11.1.221 and the shared data directory is /data/cmp/. With that in place, deploy nfs-client as follows.
Step 1: define the Deployment, replacing the relevant parameters with our own NFS settings (nfs-client.yaml):
kind: Deployment
apiVersion: apps/v1
metadata:
name: nfs-client-provisioner
spec:
selector:
matchLabels:
app: nfs-client-provisioner
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: quay.io/external_storage/nfs-client-provisioner:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: fuseim.pri/ifs
- name: NFS_SERVER
value: 10.11.1.221
- name: NFS_PATH
value: /data/k8s
volumes:
- name: nfs-client-root
nfs:
server: 10.11.1.221
path: /data/k8s
Step 2: replace the NFS_SERVER and NFS_PATH environment variables (and the matching nfs volume settings below them). As you can see, the Deployment uses a ServiceAccount named nfs-client-provisioner, so we need to create that SA and bind the required permissions to it: (nfs-client-sa.yaml)
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: default
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
This creates a new ServiceAccount named nfs-client-provisioner and binds it to a ClusterRole named nfs-client-provisioner-runner. That ClusterRole declares a set of permissions, including create, delete, get, list and watch on persistentvolumes, which is what allows this ServiceAccount to create PVs automatically.
Step 3: with the nfs-client Deployment declared, we can now create a StorageClass object: (nfs-client-class.yaml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: test-nfs-storage
provisioner: fuseim.pri/ifs
This declares a StorageClass object named test-nfs-storage. Note that the provisioner value must exactly match the PROVISIONER_NAME environment variable in the Deployment above.
Now create these resource objects:
$ kubectl create -f nfs-client.yaml
$ kubectl create -f nfs-client-sa.yaml
$ kubectl create -f nfs-client-class.yaml
After creation, check the resource status:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
...
nfs-client-provisioner-7648b664bc-7f9pk 1/1 Running 0 7h
...
$ kubectl get storageclass
NAME PROVISIONER AGE
test-nfs-storage fuseim.pri/ifs 11s
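To confirm dynamic provisioning works end to end, you can create a throwaway PVC against the new class; the claim name below is hypothetical:

```
# test-pvc.yaml — hypothetical smoke test for the StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: test-nfs-storage
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 1Mi
```

After `kubectl create -f test-pvc.yaml`, `kubectl get pvc test-claim` should report STATUS Bound, and a matching ${namespace}-${pvcName}-${pvName} directory should appear under the NFS share.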
III. Deploying and configuring Jenkins
1. Grant Jenkins the required permissions (rbac.yaml)
---
# Create a ServiceAccount named jenkins
apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins
---
# Create a Role named jenkins that grants management of Pod resources in the core API group
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: jenkins
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
# Bind the Role named jenkins to the ServiceAccount named jenkins
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: jenkins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins
subjects:
- kind: ServiceAccount
name: jenkins
2. Create the StatefulSet (statefulset.yml)
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: jenkins
labels:
name: jenkins
spec:
selector:
matchLabels:
name: jenkins
serviceName: jenkins
replicas: 1
updateStrategy:
type: RollingUpdate
template:
metadata:
name: jenkins
labels:
name: jenkins
spec:
terminationGracePeriodSeconds: 10
serviceAccountName: jenkins
containers:
- name: jenkins
image: jenkins/jenkins:lts-alpine
imagePullPolicy: Always
ports:
- containerPort: 8080
- containerPort: 50000
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 0.5
memory: 500Mi
env:
- name: LIMITS_MEMORY
valueFrom:
resourceFieldRef:
resource: limits.memory
divisor: 1Mi
- name: JAVA_OPTS
value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
livenessProbe:
httpGet:
path: /login
port: 8080
initialDelaySeconds: 60
timeoutSeconds: 5
failureThreshold: 12
readinessProbe:
httpGet:
path: /login
port: 8080
initialDelaySeconds: 60
timeoutSeconds: 5
failureThreshold: 12
securityContext:
fsGroup: 1000
volumeClaimTemplates:
- metadata:
name: jenkins-home
spec:
storageClassName: "test-nfs-storage"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 20Gi
3. Create the Jenkins Service (service.yml)
apiVersion: v1
kind: Service
metadata:
name: jenkins
spec:
selector:
name: jenkins
type: NodePort
ports:
-
name: http
port: 80
targetPort: 8080
protocol: TCP
nodePort: 30006
-
name: agent
port: 50000
protocol: TCP
4. Create the Jenkins Ingress (ingress.yml)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: jenkins
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: jenkins.qienda.com
http:
paths:
- backend:
serviceName: jenkins
servicePort: 80
5. Install Jenkins plugins
① Manage Jenkins → Manage Plugins → Available → install the pipeline, git, kubernetes, Kubernetes Continuous Deploy and similar plugins.
② Configure the Tsinghua mirror as the update site: https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json
6. Configure Kubernetes in Jenkins
Manage Jenkins → Configure System → Cloud → Kubernetes (only a few fields need to be filled in)
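As a rough sketch, when Jenkins itself runs inside the cluster the Kubernetes cloud form typically ends up with values like the following; the names match the manifests above, but treat these as assumptions to adapt (namespace, service names), not exact settings from this setup:

```
Name:            kubernetes
Kubernetes URL:  https://kubernetes.default
Jenkins URL:     http://jenkins.default
Jenkins tunnel:  jenkins.default:50000
```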
IV. Deploying GitLab
1. First deploy the Redis service GitLab needs; the resource manifest is as follows: (gitlab-redis.yaml)
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
labels:
name: redis
spec:
selector:
matchLabels:
name: redis
serviceName: redis
template:
metadata:
name: redis
labels:
name: redis
spec:
containers:
- name: redis
image: sameersbn/redis
imagePullPolicy: IfNotPresent
ports:
- name: redis
containerPort: 6379
volumeMounts:
- mountPath: /var/lib/redis
name: data
livenessProbe:
exec:
command:
- redis-cli
- ping
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
exec:
command:
- redis-cli
- ping
initialDelaySeconds: 5
timeoutSeconds: 1
volumeClaimTemplates:
- metadata:
name: data
annotations:
volume.beta.kubernetes.io/storage-class: gitlab-redis-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
name: redis
labels:
name: redis
spec:
ports:
- name: redis
port: 6379
targetPort: redis
selector:
name: redis
The corresponding storage manifest is as follows: (redis-storageclass.yaml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gitlab-redis-data
provisioner: fuseim.pri/ifs
2. Next the PostgreSQL database; the resource manifest is as follows: (gitlab-postgresql.yaml)
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql
labels:
name: postgresql
spec:
selector:
matchLabels:
name: postgresql
serviceName: postgresql
template:
metadata:
name: postgresql
labels:
name: postgresql
spec:
containers:
- name: postgresql
image: sameersbn/postgresql:10
imagePullPolicy: IfNotPresent
env:
- name: DB_USER
value: gitlab
- name: DB_PASS
value: passw0rd
- name: DB_NAME
value: gitlab_production
- name: DB_EXTENSION
value: pg_trgm
ports:
- name: postgres
containerPort: 5432
volumeMounts:
- mountPath: /var/lib/postgresql
name: data
livenessProbe:
exec:
command:
- pg_isready
- -h
- localhost
- -U
- postgres
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
exec:
command:
- pg_isready
- -h
- localhost
- -U
- postgres
initialDelaySeconds: 5
timeoutSeconds: 1
volumeClaimTemplates:
- metadata:
name: data
annotations:
volume.beta.kubernetes.io/storage-class: gitlab-post-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
name: postgresql
labels:
name: postgresql
spec:
ports:
- name: postgres
port: 5432
targetPort: postgres
selector:
name: postgresql
The corresponding storage manifest is as follows: (postgresql-storageclass.yaml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gitlab-post-data
provisioner: fuseim.pri/ifs
3. The GitLab application itself; the resource manifest is as follows: (gitlab.yaml)
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: gitlab
labels:
name: gitlab
spec:
selector:
matchLabels:
name: gitlab
serviceName: gitlab
template:
metadata:
name: gitlab
labels:
name: gitlab
spec:
containers:
- name: gitlab
image: sameersbn/gitlab:11.8.1
imagePullPolicy: IfNotPresent
env:
- name: TZ
value: Asia/Shanghai
- name: GITLAB_TIMEZONE
value: Beijing
- name: GITLAB_SECRETS_DB_KEY_BASE
value: long-and-random-alpha-numeric-string
- name: GITLAB_SECRETS_SECRET_KEY_BASE
value: long-and-random-alpha-numeric-string
- name: GITLAB_SECRETS_OTP_KEY_BASE
value: long-and-random-alpha-numeric-string
- name: GITLAB_ROOT_PASSWORD
value: admin321
- name: GITLAB_ROOT_EMAIL
value: 2664783896@qq.com
- name: GITLAB_HOST
value: gitlab.qienda.com
- name: GITLAB_PORT
value: "80"
- name: GITLAB_SSH_PORT
value: "30022"
- name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
value: "true"
- name: GITLAB_NOTIFY_PUSHER
value: "false"
- name: GITLAB_BACKUP_SCHEDULE
value: daily
- name: GITLAB_BACKUP_TIME
value: "01:00"
- name: DB_TYPE
value: postgres
- name: DB_HOST
value: postgresql
- name: DB_PORT
value: "5432"
- name: DB_USER
value: gitlab
- name: DB_PASS
value: passw0rd
- name: DB_NAME
value: gitlab_production
- name: REDIS_HOST
value: redis
- name: REDIS_PORT
value: "6379"
ports:
- name: http
containerPort: 80
- name: ssh
containerPort: 22
volumeMounts:
- mountPath: /home/git/data
name: data
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 200
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
timeoutSeconds: 1
volumeClaimTemplates:
- metadata:
name: data
annotations:
volume.beta.kubernetes.io/storage-class: gitlab-git-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
name: gitlab
labels:
name: gitlab
spec:
ports:
- name: http
port: 80
targetPort: http
- name: ssh
port: 22
targetPort: ssh
nodePort: 30022
type: NodePort
selector:
name: gitlab
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gitlab
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: gitlab.qienda.com
http:
paths:
- backend:
serviceName: gitlab
servicePort: http
Be sure to change the following to your own values:
- name: GITLAB_ROOT_EMAIL
value: 2664783896@qq.com
- name: GITLAB_HOST
value: gitlab.qienda.com
4. Ingress configuration:
rules:
- host: gitlab.qienda.com
Pay attention to the Redis- and PostgreSQL-related environment variables. In addition, an Ingress object is added here to give GitLab the domain gitlab.qienda.com, so once deployed the application can be reached through that domain. Then deploy everything:
$ kubectl create -f gitlab-redis.yaml -f gitlab-postgresql.yaml -f gitlab.yaml
After creation, check the Pod status:
$ kubectl get pods -n kube-ops
NAME READY STATUS RESTARTS AGE
gitlab-7d855554cb-twh7c 1/1 Running 0 10m
postgresql-8566bb959c-2tnvr 1/1 Running 0 17h
redis-8446f57bdf-4v62p 1/1 Running 0 17h
Everything is running. GitLab can now be reached through the domain defined in the Ingress, gitlab.qienda.com (which needs DNS resolution, or a mapping added to the local /etc/hosts), to open the portal:
Log in with the user root and the superuser password specified at deploy time, GITLAB_ROOT_PASSWORD=admin321,
to reach the home page:
With GitLab running, we can register new users, create projects, and change many other system settings, such as the language or the UI theme.
Click Create a project
to create a new project; it works much the same as on GitHub:
Once the project is created, you can add your local user's SSH key, so that code can be pulled and pushed over SSH. The SSH public key is usually in the ~/.ssh/id_rsa.pub file and starts with ssh-rsa. If you don't have one, generate a key pair with the ssh-keygen command; the contents of id_rsa.pub are the SSH public key to add to GitLab.
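Generating a key pair non-interactively can be sketched as follows; the file path is a throwaway example, not where GitLab expects the key:

```shell
# Generate a throwaway RSA key pair with an empty passphrase (-N "").
keyfile=/tmp/gitlab_demo_key
ssh-keygen -t rsa -b 2048 -N "" -f "$keyfile" -q
# The .pub file is what gets pasted into GitLab (Settings -> SSH Keys).
cat "${keyfile}.pub"
```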
由于平時使用的 ssh 默認是 22 端口,現在如果用默認的 22 端口去連接,是沒辦法和 Gitlab 容器中的 22 端口進行映射的,因為只是通過 Service 的 22 端口進行了映射,要想通過節點去進行 ssh 鏈接就需要在節點上一個端口和容器內部的22端口進行綁定,所以這里可以通過 NodePort 去映射 Gitlab 容器內部的22端口,比如將環境變量設置為GITLAB_SSH_PORT=30022
,將 Gitlab 的 Service 也設置為 NodePort 類型:(注:以上gitlab.yaml中已配置)
apiVersion: v1
kind: Service
metadata:
name: gitlab
labels:
name: gitlab
spec:
ports:
- name: http
port: 80
targetPort: http
- name: ssh
port: 22
targetPort: ssh
nodePort: 30022
type: NodePort
selector:
name: gitlab
Note that the nodePort for ssh above is fixed at 30022, so it will not be assigned randomly. Re-apply the StatefulSet and Service; after the update, the SSH clone URL shown on a project will include the port number:
5. Set up key-based SSH access to GitLab
On the deployment server, print the SSH public key and copy it into GitLab:
Settings → SSH Keys
6. Push code to GitLab
① Add the global git configuration:
[root@hwzx-test-kdeploy java-demo]# git config --global user.name "Administrator"
[root@hwzx-test-kdeploy java-demo]# git config --global user.email "2664783896@qq.com"
② Check the git configuration:
[root@hwzx-test-kdeploy java-demo]# git config --list
user.name=Administrator
user.email=2664783896@qq.com
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
remote.origin.url=git@192.168.31.64:/home/git/java-demo.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
branch.master.remote=origin
branch.master.merge=refs/heads/master
③ Change git@192.168.31.64:/home/git/java-demo.git to the new repository address:
[root@hwzx-test-kdeploy java-demo]# git remote rename origin old-origin
[root@hwzx-test-kdeploy java-demo]# git remote add origin ssh://git@gitlab.qienda.com:30022/root/demo.git
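Instead of embedding the port in every URL, the non-default SSH port can also be recorded in ~/.ssh/config; a sketch using the host name from this setup:

```
# ~/.ssh/config — makes plain gitlab.qienda.com SSH URLs use port 30022
Host gitlab.qienda.com
    User git
    Port 30022
```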
④ Stage and commit the code:
[root@hwzx-test-kdeploy java-demo]# git add .
[root@hwzx-test-kdeploy java-demo]# git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: deploy.yml
#
[root@hwzx-test-kdeploy java-demo]# git commit -m "demo"
[master baba273] demo
1 file changed, 22 insertions(+), 7 deletions(-)
⑤ Push the code to the GitLab repository:
[root@hwzx-test-kdeploy java-demo]# git push -u origin --all
ssh: Could not resolve hostname gitlab.qienda.com: Name or service not known
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
**Fixing the error**: the host name does not resolve, so map it in /etc/hosts:
[root@hwzx-test-kdeploy java-demo]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.8.13.80 hwzx-test-kdeploy
10.8.13.81 hwzx-test-kmaster01
10.8.13.82 hwzx-test-kmaster02
10.8.13.84 hwzx-test-knode01
10.8.13.85 hwzx-test-knode02
10.8.13.86 hwzx-test-kharbor
10.8.13.83 gitlab.qienda.com
[root@hwzx-test-kdeploy java-demo]# git push -u origin --all
The authenticity of host '[gitlab.qienda.com]:30022 ([10.8.13.83]:30022)' can't be established.
ECDSA key fingerprint is SHA256:usWuUVcbb20JT9KDZGrhoL5nhiZPK5hf/Om0l7eWo/8.
ECDSA key fingerprint is MD5:dc:88:d0:04:f4:7b:f9:e9:94:b1:ef:a4:52:74:65:63.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[gitlab.qienda.com]:30022,[10.8.13.83]:30022' (ECDSA) to the list of known hosts.
Counting objects: 526, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (319/319), done.
Writing objects: 100% (526/526), 5.01 MiB | 0 bytes/s, done.
Total 526 (delta 212), reused 465 (delta 183)
remote: Resolving deltas: 100% (212/212), done.
To ssh://git@gitlab.qienda.com:30022/root/demo.git
* [new branch] master -> master
Branch master set up to track remote branch master from origin.
V. Building the jenkins-slave image
1. Write the Dockerfile
cd /k8s/yaml/jenkins/jenkins-slave
FROM centos:7
LABEL maintainer qienda
RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
yum clean all && \
rm -rf /var/cache/yum/* && \
mkdir -p /usr/share/jenkins
COPY slave.jar /usr/share/jenkins/slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave
ENTRYPOINT ["jenkins-slave"]
2. Build and push the jenkins-slave image
docker build -t 10.8.13.86/library/jenkins-slave-jdk:1.8 .
docker push 10.8.13.86/library/jenkins-slave-jdk:1.8
VI. Testing automatic slave provisioning from a Jenkins pipeline
1. Create a new Jenkins pipeline job
2. Write a test pipeline:
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
containerTemplate(
name: 'jnlp',
image: "10.8.13.86/library/jenkins-slave-jdk:1.8"
),
],
volumes: [
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
],
)
{
node("jenkins-slave"){
// Step 1
stage('Pull code'){
}
}
}
3. Click Build to test it
VII. Building a Java project with Jenkins
1. Jenkinsfile (four main stages: pull the code, compile it, build the image, deploy to the k8s platform)
// Common
def registry = "10.8.13.86"
// Project
def project = "java"
def app_name = "demo"
def image_name = "${registry}/${project}/${app_name}:${BUILD_NUMBER}"
def git_address = "ssh://git@10.8.13.83:30022/root/demo.git"
// Credentials
def secret_name = "registry-pull-secret"
def docker_registry_auth = "9e8bf482-b207-4952-80bd-779db3ec3001"
def git_auth = "a87cfe6f-fceb-4c47-8ae5-808580b4117a"
def k8s_auth = "80e66a86-d189-4555-b1ef-054285031b7a"
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
containerTemplate(
name: 'jnlp',
image: "${registry}/library/jenkins-slave-jdk:1.8"
),
],
volumes: [
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
],
)
{
node("jenkins-slave"){
// Step 1
stage('Pull code'){
checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
}
// Step 2
stage('Compile'){
sh "mvn clean package -Dmaven.test.skip=true"
}
// Step 3
stage('Build image'){
withCredentials([usernamePassword(credentialsId: "${docker_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
sh """
echo '
FROM lizhenliang/tomcat
RUN rm -rf /usr/local/tomcat/webapps/*
ADD target/*.war /usr/local/tomcat/webapps/ROOT.war
' > Dockerfile
ls
ls target
docker build -t ${image_name} .
docker login -u ${username} -p '${password}' ${registry}
docker push ${image_name}
"""
}
}
// Step 4
stage('Deploy to the K8S platform'){
sh """
sed -i 's#\$IMAGE_NAME#${image_name}#' deploy.yml
sed -i 's#\$SECRET_NAME#${secret_name}#' deploy.yml
"""
kubernetesDeploy configs: 'deploy.yml', kubeconfigId: "${k8s_auth}"
}
}
}
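The placeholder substitution done in the deploy stage can be exercised locally; the deploy.yml snippet below is a made-up stand-in for the real manifest:

```shell
# Reproduce the pipeline's sed substitutions on a throwaway copy of deploy.yml.
image_name="10.8.13.86/java/demo:42"   # example ${registry}/${project}/${app_name}:${BUILD_NUMBER}
secret_name="registry-pull-secret"

cat > /tmp/deploy.yml <<'EOF'
containers:
- image: $IMAGE_NAME
imagePullSecrets:
- name: $SECRET_NAME
EOF

sed -i "s#\$IMAGE_NAME#${image_name}#" /tmp/deploy.yml
sed -i "s#\$SECRET_NAME#${secret_name}#" /tmp/deploy.yml
cat /tmp/deploy.yml
```

The `#` delimiter avoids clashing with the `/` characters inside the image path, which is why the Jenkinsfile uses it too.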
2. Edit the common and project variables in the Jenkinsfile
These include the harbor address, the project name in harbor, the project name in jenkins, and the address of the code repository.
3. Add the SSH key credential in jenkins
4. Add the private key to jenkins (choose the SSH type)
5. Generate the pipeline credential IDs with the snippet generator
6. Add the harbor credential in jenkins
7. Configure the harbor username and password in jenkins
8. Configure the k8s credential in jenkins
9. Add the k8s kube.config credentials to jenkins
cat .kube/config
10. Add the yaml configuration in jenkins
11. Replace the k8s id in jenkins (deploying to k8s through jenkins requires the Kubernetes Continuous Deploy plugin)
12. Build the pipeline demo in jenkins