Logging is one of the major challenges of any large deployment on a platform like Kubernetes, but configuring and maintaining a central repository for log collection can ease day-to-day operations. For that purpose, the combination of Fluentd, Elasticsearch, and Kibana can create a powerful logging layer on top of a Kubernetes cluster.
1. Deploy the Elasticsearch cluster
Most of the applications deployed earlier already run inside Kubernetes, and adding Elasticsearch to the cluster as well would overload my machine, so here I spin up three separate virtual machines for the Elasticsearch cluster (in production it is usually deployed separately anyway).
- Preparation
  - VM spec: 1 core, 1 GB RAM, 20 GB disk
  - Network:

    ```
    192.168.241.150 es-node1
    192.168.241.151 es-node2
    192.168.241.152 es-node3
    ```

  - Create a new user; Elasticsearch cannot be started as root:

    ```shell
    adduser hadoop
    ```
- Basic configuration (Linux limits)
  - Set the `vm.max_map_count` kernel parameter:

    ```shell
    sudo vi /etc/sysctl.conf
    ```

    ```
    # elasticsearch config start
    vm.max_map_count=262144
    # elasticsearch config end
    ```

    Apply it with `sudo sysctl -p`.

  - Raise the open-file and max-thread limits (they take effect on the next login):

    ```shell
    sudo vi /etc/security/limits.conf
    ```

    ```
    # elasticsearch config start
    * soft nofile 65536
    * hard nofile 131072
    * soft nproc 2048
    * hard nproc 4096
    # elasticsearch config end
    ```
- Download and install Elasticsearch
  - Download and extract:

    ```shell
    curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
    sudo tar -zxvf elasticsearch-7.2.0-linux-x86_64.tar.gz -C /opt/
    ```
  - Configure Elasticsearch:

    ```shell
    vi config/elasticsearch.yml
    ```

    ```yaml
    # ---------------------------------- Cluster -----------------------------------
    # Use a descriptive name for your cluster:
    cluster.name: cluster
    # ------------------------------------ Node ------------------------------------
    # Use a descriptive name for this node; must be changed on the other nodes:
    node.name: node-1
    # ----------------------------------- Paths ------------------------------------
    # Path to the directory where to store the data:
    path.data: /opt/elasticsearch/data
    # Path to log files:
    path.logs: /opt/elasticsearch/logs
    # ---------------------------------- Network -----------------------------------
    # Bind address: this machine's own IP; must be changed on the other nodes:
    network.host: 192.168.241.150
    # Custom port for HTTP:
    #http.port: 9200
    # --------------------------------- Discovery ----------------------------------
    # Initial list of hosts to perform discovery against when this node starts:
    # the *other* nodes, relative to the current one. This is a unicast list and
    # does not need to contain every node in the cluster.
    discovery.seed_hosts: ["es-node2", "es-node3"]
    # Bootstrap the cluster with an initial set of master-eligible nodes; the
    # entries must match the node.name values:
    cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
    ```
  - Configure the JVM heap:

    ```shell
    vi config/jvm.options
    ```

    ```
    # Xms represents the initial size of total heap space
    # Xmx represents the maximum size of total heap space
    # Set Xmx and Xms to no more than 50% of your physical RAM.
    -Xms512m
    -Xmx512m
    ```
  - Start:

    ```shell
    # start in the background
    bin/elasticsearch -d
    # or: record the PID and start in the background
    ./bin/elasticsearch -p /tmp/elasticsearch-pid -d
    ```
  - A small start/stop script makes this easier to use (save it as, e.g., `es.sh`, then run `./es.sh start` / `./es.sh stop`):

    ```shell
    #!/bin/bash
    ES_HOME=/opt/elasticsearch
    ES_PID_FILE=/tmp/elasticsearch-pid
    action=$1
    case $action in
    'start')
        if [ -e $ES_PID_FILE ]
        then
            echo 'es already started!'
        else
            echo 'starting es begin......'
            $ES_HOME/bin/elasticsearch -p $ES_PID_FILE -d
            sleep 2
            echo 'start es success.......'
        fi
        ;;
    'stop')
        if [ -e $ES_PID_FILE ]
        then
            echo 'stopping es begin......'
            es_pid=`cat $ES_PID_FILE`
            # SIGTERM lets Elasticsearch shut down cleanly (avoid kill -9)
            kill $es_pid
            sleep 3
            rm $ES_PID_FILE
            echo 'stop es success........'
        else
            echo "es doesn't start!!"
        fi
        ;;
    *)
        echo 'not supported!!'
        ;;
    esac
    ```
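Once all three nodes are started, cluster formation can be verified over the HTTP API. A minimal sketch: the JSON below is a hand-written sample of the `_cluster/health` response shape (on a real cluster, fetch it with the commented-out curl), so the field values are illustrative only.

```shell
# On a live cluster:
#   health=$(curl -s http://es-node1:9200/_cluster/health)
# Hand-written sample response, for illustration:
health='{"cluster_name":"cluster","status":"green","number_of_nodes":3}'

# Pull out the status field; green (or yellow) means the cluster formed
status=$(echo "$health" | sed 's/.*"status":"\([a-z]*\)".*/\1/')
echo "cluster status: $status"
```

A red status, or a `number_of_nodes` below 3, usually points back at the `discovery.seed_hosts` / `network.host` settings above.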
2. Deploy Kibana
- Download and extract:

  ```shell
  curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
  sudo tar -zxvf kibana-7.2.0-linux-x86_64.tar.gz -C /opt/
  ```
- Configure:

  ```shell
  vi kibana/config/kibana.yml
  ```

  ```yaml
  # Specifies the address to which the Kibana server will bind. IP addresses and
  # host names are both valid values. The default is 'localhost', which usually
  # means remote machines will not be able to connect. To allow connections from
  # remote users, set this parameter to a non-loopback address.
  server.host: "192.168.241.150"
  # The URLs of the Elasticsearch instances to use for all your queries.
  elasticsearch.hosts: ["http://es-node1:9200", "http://es-node2:9200", "http://es-node3:9200"]
  ```
- Start:

  ```shell
  ./bin/kibana
  ```
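After Kibana is up, its status API on port 5601 reports whether it reached Elasticsearch. A sketch: the JSON below is a hand-written, heavily trimmed sample of the `/api/status` response shape, not captured output.

```shell
# On a live instance:
#   resp=$(curl -s http://192.168.241.150:5601/api/status)
# Trimmed sample response, for illustration:
resp='{"status":{"overall":{"state":"green"}}}'

# Extract the overall state; green means Kibana connected to Elasticsearch
state=$(echo "$resp" | sed 's/.*"state":"\([a-z]*\)".*/\1/')
echo "kibana state: $state"
```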
- Stop:

  ```shell
  # find the PID first (note: Kibana runs on Node.js, not the JVM,
  # so grepping for java would find Elasticsearch instead)
  ps -ef | grep node
  # then stop it
  kill PID
  ```
(figure: 1.png)
3. Deploy Fluentd in Kubernetes
For the log-collection options available in Kubernetes, see the official documentation; here we use the node logging agent approach.

(figure: 2.png)
- Fluentd notes
  - Recommendations for handling logs of containerized applications:
    - The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
    - Logs should have a separate storage and lifecycle independent of nodes, pods, or containers.
  - Log collection pipeline:

    ```
    in_tail -> filter_grep -> out_stdout
    ```
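These three stages map onto a Fluentd configuration like the following sketch; the file path and the `error` pattern are made up for illustration:

```
# in_tail: follow a log file, emitting one event per line
<source>
  @type tail
  path /var/log/app.log
  pos_file /var/log/app.log.pos
  tag app.log
  <parse>
    @type none
  </parse>
</source>

# filter_grep: keep only events whose message matches a pattern
<filter app.log>
  @type grep
  <regexp>
    key message
    pattern /error/i
  </regexp>
</filter>

# out_stdout: print the surviving events to standard output
<match app.log>
  @type stdout
</match>
```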
  - Event data structure:
    - tag: where the message came from
    - time: the event timestamp
    - record: the log content itself
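For example, here is an event in the shape the stdout output plugin prints it, with the fields pulled apart (the tag and record values are made up):

```shell
# A Fluentd event rendered by out_stdout: time, tag, then the record as JSON
event='2019-07-01 12:00:00.000000000 +0800 app.log: {"message":"disk full"}'

# time = fields 1-3; tag = field 4 (without the trailing colon)
tag=$(echo "$event" | awk '{print $4}' | tr -d ':')
# record = everything after the tag
record=$(echo "$event" | cut -d' ' -f5-)
echo "tag=$tag record=$record"
```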
- Download:

  ```shell
  git clone https://github.com/fluent/fluentd-kubernetes-daemonset
  ```
- Update the Elasticsearch connection settings: in the fluentd-kubernetes-daemonset folder, edit the DaemonSet spec in `fluentd-daemonset-elasticsearch-rbac.yaml`:

  ```yaml
  spec:
    selector:
      matchLabels:
        k8s-app: fluentd-logging
        version: v1
    template:
      metadata:
        labels:
          k8s-app: fluentd-logging
          version: v1
      spec:
        serviceAccount: fluentd
        serviceAccountName: fluentd
        tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "192.168.241.150"  # change to the ES host name or IP
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            # X-Pack Authentication
            # =====================
            - name: FLUENT_ELASTICSEARCH_USER
              value: "elastic"
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              value: "changeme"
  ```
- Install:

  ```shell
  kubectl apply -f fluentd-daemonset-elasticsearch-rbac.yaml
  ```
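Before moving on to Kibana, it is worth confirming logs are actually flowing by checking the Fluentd pods and the indices they create. The `_cat/indices` row below is a hand-written sample of the usual output shape (Fluentd's Elasticsearch output writes daily `logstash-*` indices by default); only the commented-out commands touch a live cluster.

```shell
# On a live setup:
#   kubectl get pods -n kube-system -l k8s-app=fluentd-logging
#   curl -s 'http://es-node1:9200/_cat/indices?v'
# Hand-written sample _cat/indices row, for illustration:
row='green open logstash-2019.07.01 abc123 1 1 1000 0 1mb 512kb'

# The index name is the third column
index=$(echo "$row" | awk '{print $3}')
echo "index: $index"
```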
4. Create an index pattern in Kibana to index the collected logs
- Log in to Kibana, go to Management ---> Index Patterns, and click Create.

  (figure: 3.png)
- View the result:

  (figure: 4.png)