For setting up (and license-cracking) X-Pack, see the earlier installation write-up: ELK + Filebeat + X-Pack platform setup.
1. Useful commands
- List all users
[elk@elk ~]$ curl -XGET -u elastic 'localhost:9200/_xpack/security/user?pretty'
Once HTTP SSL authentication is configured, the request needs certificates:
[elk@elk ~]$ curl -XGET -u elastic 'https://localhost:9200/_xpack/security/user?pretty' --key client.key --cert client.cer --cacert client-ca.cer -k -v
- List all roles
[elk@elk ~]$ curl -XGET -u elastic 'localhost:9200/_xpack/security/role?pretty'
- Create a user [chengen]
[elk@elk ~]$ curl -X POST -u elastic "localhost:9200/_xpack/security/user/chengen" -H 'Content-Type: application/json' -d'
{
"password" : "your passwd",
"roles" : [ "admin"],
"full_name" : "chengen",
"email" : "xxxxxx@qq.com",
"metadata" : {
"intelligence" : 7
}
}
'
Aside: the -u elastic flag is there because only the elastic user has user-management privileges. You can also append ?pretty to any request for nicer-formatted output.
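Following on from the aside: curl's -u flag simply adds an HTTP Basic Authorization header. A minimal Python sketch of what that request amounts to (the password and user document are placeholders mirroring the curl example above, not values from a live cluster):

```python
import base64
import json

def basic_auth_header(user, password):
    """Build the Authorization header that curl's -u flag sends."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# The same user document the curl command POSTs to
# /_xpack/security/user/chengen (all values are placeholders).
user_doc = {
    "password": "your passwd",
    "roles": ["admin"],
    "full_name": "chengen",
    "email": "xxxxxx@qq.com",
    "metadata": {"intelligence": 7},
}

headers = {**basic_auth_header("elastic", "your passwd"),
           "Content-Type": "application/json"}
body = json.dumps(user_doc)  # ready to send with any HTTP client
```

Any HTTP client that sends `headers` and `body` to the same endpoint is equivalent to the curl invocation.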
- Change a password
curl -X POST -u elastic "localhost:9200/_xpack/security/user/test/_password" -H 'Content-Type: application/json' -d'
{
"password" : "your passwd"
}
'
This changes the password of the user whose username is test; note that test is the username, not the full_name.
- Disable / enable / delete a user
curl -X PUT -u elastic "localhost:9200/_xpack/security/user/test/_disable"
curl -X PUT -u elastic "localhost:9200/_xpack/security/user/test/_enable"
curl -X DELETE -u elastic "localhost:9200/_xpack/security/user/test"
- Create a beats_admin1 role that has all privileges on filebeat* indices and manage, read, and index privileges on .kibana*
curl -X POST -u elastic 'localhost:9200/_xpack/security/role/beats_admin1' -H 'Content-Type: application/json' -d '{
"indices" : [
{
"names" : [ "filebeat*" ],
"privileges" : [ "all" ]
},
{
"names" : [ ".kibana*" ],
"privileges" : [ "manage", "read", "index" ]
}
]
}'
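To see what the two index patterns in this role actually cover, here is a rough sketch. Note that fnmatch only approximates Elasticsearch's own wildcard matching, and the sample index names below are made up for illustration:

```python
from fnmatch import fnmatch

# Patterns and privileges from the beats_admin1 role above.
role_patterns = {
    "filebeat*": ["all"],
    ".kibana*": ["manage", "read", "index"],
}

def privileges_for(index, patterns=role_patterns):
    """Collect every privilege the role grants on a concrete index name."""
    granted = set()
    for pattern, privs in patterns.items():
        if fnmatch(index, pattern):
            granted.update(privs)
    return granted
```

So a daily index like filebeat-2019.07.01 falls under the "all" grant, .kibana_1 gets manage/read/index, and any unrelated index gets nothing.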
Security API: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api.html
2. ElasticSearch Head
The next time you open the ElasticSearch Head plugin in the browser, it will prompt for a password.
3. Kibana
The next time you open Kibana, it will also prompt for a password.
The menu bar gains entries such as Monitoring, where you can view cluster-status and node-status metrics.
Note: with X-Pack security enabled, loading a Kibana dashboard that reads from indices you are not allowed to see produces an index-not-found error. X-Pack security currently provides no way to control which users can load which dashboards.
4. Role-based access control
The entry point is Management -> Users/Roles. Users lets you manage users and assign them roles; roles map to permissions. Roles lets you manage roles and grant them privileges. A role is a collection of permissions, and a permission is a collection of privileges. The privilege types are:
- Cluster privileges;
- Run As privileges: let the role act with the privileges of the selected users;
- Index privileges:
  - Indices: which indices the grant applies to;
  - Privileges: which privileges are granted;
  - Granted Documents Query (optional): restrict the grant to documents matching a query;
  - Granted Fields (optional): restrict the grant to specific fields.
To expose a single index to selected users only:
- create the role;
- grant the role its Elasticsearch privileges;
- grant the role its Kibana privileges.
5. X-Pack privileges
Built-in accounts

username | role | description
---|---|---
elastic | superuser | The built-in superuser.
kibana | kibana_system | The user Kibana uses to connect to and communicate with Elasticsearch. The Kibana server submits requests as this user to access the cluster monitoring APIs and the .kibana index. It cannot access user indices.
logstash_system | logstash_system | Used by Logstash when storing monitoring information in Elasticsearch.
Security roles

role | description
---|---
ingest_admin | Grants access to manage all index templates and all ingest pipeline configurations. This role does not grant the ability to create indices; those privileges must be defined in a separate role.
kibana_dashboard_only_user | Grants access to Kibana dashboards and read-only access to the .kibana index. This role cannot reach Kibana's editing tools.
kibana_system | Grants the access the Kibana system user needs to read from and write to the Kibana indices, manage index templates, and check the availability of the Elasticsearch cluster. It also grants read access to the .monitoring-* indices and read/write access to the .reporting-* indices.
kibana_user | Grants the minimum privileges a Kibana user needs: access to the Kibana index plus monitoring privileges on the cluster.
logstash_admin | Grants access to the .logstash* indices used for managing configurations.
logstash_system | Grants the access the Logstash system user needs to send system-level data (such as monitoring) to Elasticsearch. Do not assign this role to users, as the granted privileges may change between releases. It does not grant access to the logstash indices and is not suitable for use inside a Logstash pipeline.
machine_learning_admin | Grants the manage_ml cluster privilege and read access to the .ml-* indices.
machine_learning_user | Grants the minimum privileges needed to view X-Pack machine learning configuration, status, and results: the monitor_ml cluster privilege plus read access to the .ml-notifications and .ml-anomalies* indices, which store machine learning results.
monitoring_user | Grants the minimum privileges required for any user of X-Pack monitoring other than those needed to use Kibana; it allows access to the monitoring indices. Monitoring users should also be assigned the kibana_user role.
remote_monitoring_agent | Grants the minimum privileges a remote monitoring agent needs to write data into this cluster.
reporting_user | Grants the specific privileges required by X-Pack reporting users of Kibana; it allows access to the reporting indices. Reporting users should also be assigned the kibana_user role plus a role granting access to the data the reports will be generated from.
superuser | Grants full access to the cluster, including all indices and data. A user with the superuser role can also manage users and roles and impersonate any other user in the system. Given how permissive this role is, assign it with great care.
transport_client | Grants the privileges required to access the cluster through the Java Transport Client, which uses the node liveness API and (when sniffing is enabled) the cluster state API to learn about the nodes in the cluster. Assign this role to users who use the Transport Client. It effectively grants access to the cluster state, so the user can see metadata about all indices, index templates, mappings, nodes, and essentially everything in the cluster; it does not, however, grant permission to view the data in those indices.
watcher_admin | Grants write access to the .watches index, read access to the watch history and triggered-watches indices, and permission to run all watcher actions.
watcher_user | Grants read access to the .watches index and permission to fetch watches and watcher statistics.

For cluster privileges and index privileges, see the official documentation.
6. X-Pack feature switches

Setting | Description
---|---
xpack.security.enabled | Set to false to disable X-Pack security. Must be set in both elasticsearch.yml and kibana.yml.
xpack.monitoring.enabled | Set to false to disable X-Pack monitoring. Must be set in both elasticsearch.yml and kibana.yml.
xpack.graph.enabled | Set to false to disable X-Pack graph. Must be set in both elasticsearch.yml and kibana.yml.
xpack.watcher.enabled | Set to false to disable Watcher. Only needs to be set in elasticsearch.yml.
xpack.reporting.enabled | Set to false to disable reporting. Only needs to be set in kibana.yml.

After changing the configuration, restart the services:

nohup ./bin/logstash -f config/logstash.conf >/dev/null 2>&1 &
./bin/elasticsearch -d
./bin/kibana --verbose > kibana.log 2>&1 &
./filebeat -c filebeat_secure.yml -e -v
7. Configuring SSL/TLS and HTTPS for the ES cluster, Kibana, Logstash, and Filebeat
Notes:
1. When creating the ES certificates, always pass --dns and --ip; otherwise inter-service communication will fail later, as in the problem recorded in 7.5.
2. For a cluster, copy the elastic-stack-ca.p12 generated on es1 to es2 and es3, and regenerate an elastic-certificates.p12 on each node with the correct DNS and IP.
3. When X-Pack is installed on a cluster, certificate encryption must be configured, or the cluster will not start properly.
4. You can follow my steps below, or follow the other certificate-setup guide I only found after finishing; I have not tried it yet, but its approach looks simpler, so give it a try if you are interested.
7.1 ES cluster security configuration
- Create certificates with elasticsearch-certutil
bin/elasticsearch-certutil ca  // creates elastic-stack-ca.p12
bin/elasticsearch-certutil cert --ca config/certs/elastic-stack-ca.p12 --dns 192.168.100.203 --ip 192.168.100.203  // creates elastic-certificates.p12
Create a certs folder under config and put elastic-stack-ca.p12 and elastic-certificates.p12 in it.
- Transport encryption (step 1: authentication between cluster nodes)
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
- Set passwords for the built-in users (step 2; do this before enabling HTTP SSL, because the password-setup command talks to the cluster over plain HTTP)
bin/elasticsearch-setup-passwords interactive
- HTTP SSL encryption (step 3: HTTPS for the ES cluster)
For HTTP traffic the Elasticsearch nodes act only as servers; add the following to elasticsearch.yml:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: optional
Putting it together, elasticsearch.yml ends up with:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: optional
After these changes, restart the ES cluster; accessing it now requires HTTPS, so point ElasticSearch Head at https://192.168.100.203:9200/.
Problem:
With xpack.security.authc.realms.pki1.type: pki configured in elasticsearch.yml, after Kibana starts only the built-in accounts (elastic, kibana, ...) can log in; custom accounts such as test cannot.
Fix: remove xpack.security.authc.realms.pki1.type: pki and logins return to normal.
7.2 kibana.yml security configuration
- Certificates for Kibana-to-ES communication (generates elastic-stack-ca.pem)
cd /elk/elasticsearch-6.7.0
openssl pkcs12 -in config/certs/elastic-stack-ca.p12 -clcerts -nokeys -chain -out elastic-stack-ca.pem
Create a certs folder under Kibana's config directory and put elastic-stack-ca.pem in it.
- Configure HTTPS access to Kibana
Create the Kibana SSL certificate (generates kibana.key and kibana.crt; create an ssl folder under certs and put them there):
openssl req -subj '/CN=192.168.100.203/' -x509 -days $((100 * 365)) -batch -nodes -newkey rsa:2048 -keyout kibana.key -out kibana.crt
Putting it together, kibana.yml contains:
# note the ES address uses https
elasticsearch.hosts: ["https://192.168.100.203:9200"]
xpack.security.enabled: true
elasticsearch.username: "kibana"
elasticsearch.password: "your passwd"
# enable HTTPS logins to Kibana
server.ssl.enabled: true
server.ssl.certificate: /elk/kibana-6.7.0/config/certs/ssl/kibana.crt
server.ssl.key: /elk/kibana-6.7.0/config/certs/ssl/kibana.key
# trusted communication between Kibana and ES
elasticsearch.ssl.certificateAuthorities: config/certs/elastic-stack-ca.pem
elasticsearch.ssl.verificationMode: certificate
References:
ELK security hardening
Kibana and Logstash with X-Pack
https://www.elastic.co/guide/en/kibana/6.3/configuring-tls.html
7.3 Logstash security configuration
- Communication between Logstash and the ES cluster
Copy the elastic-stack-ca.pem generated in 7.2 into /elk/logstash-6.7.0/config/certs.
Then add to Logstash's configuration:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: your passwd
xpack.monitoring.elasticsearch.hosts: ["https://192.168.100.203:9200","https://192.168.100.202:9200","https://192.168.100.201:9200"]
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/elk/logstash-6.7.0/config/certs/elastic-stack-ca.pem"
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: 60s
xpack.monitoring.collection.pipeline.details.enabled: true
- Communication between Logstash and Filebeat
Generate a zip file containing instance.crt and instance.key:
bin/elasticsearch-certutil cert --pem --ca config/certs/elastic-stack-ca.p12 --dns 192.168.100.203 --ip 192.168.100.203  ## generates a zip containing instance.crt and instance.key
Convert the key to PKCS#8 (generates instance.pkcs8.key), the format expected on the Beats/Logstash side:
openssl pkcs8 -in instance.key -topk8 -nocrypt -out instance.pkcs8.key
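Why the conversion matters: the PEM header line tells you which container a private key uses, and the Logstash beats input expects PKCS#8 for ssl_key. A quick way to check what you have (a simplified header check, not a full parser):

```python
def pem_key_format(pem_text):
    """Classify a PEM private key by its header line.

    'BEGIN RSA PRIVATE KEY' is the traditional PKCS#1 wrapping;
    the openssl pkcs8 command above rewraps it as 'BEGIN PRIVATE KEY'
    (PKCS#8), which is the format wanted for ssl_key.
    """
    first = pem_text.strip().splitlines()[0]
    if first == "-----BEGIN PRIVATE KEY-----":
        return "pkcs8"
    if first == "-----BEGIN RSA PRIVATE KEY-----":
        return "pkcs1"
    return "unknown"
```

Run it over instance.key before and after the openssl pkcs8 step to confirm the conversion took effect.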
logstash-sample.conf:
input {
beats {
port => 5044
ssl => true
ssl_key => "/elk/logstash-6.7.0/config/certs/instance/instance.pkcs8.key"
ssl_certificate => "/elk/logstash-6.7.0/config/certs/instance/instance.crt"
}
}
output {
elasticsearch {
user => "elastic"
password => "your passwd"
ssl => true
ssl_certificate_verification => true
cacert => "/elk/logstash-6.7.0/config/certs/elastic-stack-ca.pem"
hosts => ["https://192.168.100.203:9200/","https://192.168.100.202:9200/","https://192.168.100.201:9200/"]
index => "testlog"
}
}
The output block sends data to the ES cluster encrypted; the input block secures the data arriving from Filebeat.
If Logstash starts cleanly, the configuration is correct.
7.4 Filebeat security configuration
Copy the CA certificate used by Logstash (elastic-stack-ca.pem) to the machine running Filebeat, e.g. /opt/filebeat/certs/.
Modify the output section of filebeat.yml:
output.logstash:
# The Logstash hosts
hosts: ["192.168.100.203:5044"]
ssl.certificate_authorities: ["/opt/filebeat/certs/elastic-stack-ca.pem"]
Restart Filebeat; if the data shows up in Kibana as usual, the configuration works.
7.5 Troubleshooting
1 "Host name '192.168.100.203' does not match the certificate subject provided by the peer"
Error message:
[2019-06-24T13:57:08,032][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Host name '192.168.100.203' does not match the certificate subject provided by the peer (CN=instance)"}
Fix: this happens when ES and Logstash communicate over TLS using certificates created without --dns and --ip.
PS: the certificates drove me up the wall; securing the ES cluster plus Logstash, Kibana, and Filebeat took two full days. The command that finally worked:
bin/elasticsearch-certutil cert --ca config/certs/elastic-stack-ca.p12 --dns 192.168.100.203 --ip 192.168.100.203
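What the client is objecting to can be sketched as a comparison between the host being contacted and the names embedded in the certificate. This is a simplification of real TLS hostname verification, using the names from this section:

```python
def san_matches(entry, host):
    """Match one certificate name entry against the contacted host.
    DNS wildcards cover exactly one leading label (*.example.com)."""
    if entry.startswith("*."):
        return "." in host and host.split(".", 1)[1] == entry[2:]
    return entry == host

def cert_matches_host(san_entries, host):
    """A cert made with --dns/--ip lists the host among its names and
    passes; one made without them carries only CN=instance and fails."""
    return any(san_matches(e, host) for e in san_entries)
```

With `san_entries=["instance"]` (the default CN) and `host="192.168.100.203"` the check fails, which is exactly the error Logstash logged above.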
2 Another certificate problem, similar to 1
Because the ELK stack lives on an internal network, logs from an external host (IP: 1.2.3.4) were forwarded in with frp. When the external machine connected through the public IP and port, it failed:
2019-07-01T16:29:17.253+0800 ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://1.2.3.4:5044)): x509: certificate is valid for 192.168.100.203, not 118.25.198.208
2019-07-01T16:29:17.253+0800 INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://1.2.3.4:5044)) with 1 reconnect attempt(s)
The fix is back in 7.3 (Logstash-Filebeat communication): when generating the zip file, add the public IP to both --dns and --ip:
bin/elasticsearch-certutil cert --pem --ca config/certs/elastic-stack-ca.p12 --dns 192.168.100.203,1.2.3.4 --ip 192.168.100.203,1.2.3.4  ## generates a zip containing instance.crt and instance.key
The remaining steps are unchanged; only the instance.pkcs8.key and instance.crt used by Logstash need replacing (config below). The CA certificate on the Filebeat side stays the same, and data flows normally again:
input {
beats {
port => 5044
ssl => true
ssl_key => "/elk/logstash-6.7.0/config/certs/instance/instance.pkcs8.key"
ssl_certificate => "/elk/logstash-6.7.0/config/certs/instance/instance.crt"
}
}
References: https://www.elastic.co/guide/en/elasticsearch/reference/6.7/configuring-tls.html#node-certificates
https://discuss.elastic.co/t/certificates-and-keys-for-kibana-and-logstash-with-x-pack/150390
(pay particular attention to how the certificates are made)
3 "Could not index event to Elasticsearch"
Error message:
][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.07.05", :_type=>"syslog", :_routing=>nil}, 2017-07-05T03:50:10.577Z 10.91.142.103 <179>Jul 5 09:20:10 10.91.126.1 TMNX: 45006214 Base PORT-MINOR-etherAlarmSet-2017 [Port 5/2/4]: Alarm Remote Fault Set], :response=>{"index"=>{"_index"=>"logstash-2017.07.05", "_type"=>"syslog", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index and [action.auto_create_index] ([.security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*]) doesn't match", "index_uuid"=>"na", "index"=>"logstash-2017.07.05"}}}}
Fix:
Remove the restriction from elasticsearch.yml, i.e. delete the line below:
action.auto_create_index: ".security*,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*"
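The 404 above comes from the allow-list semantics of action.auto_create_index: an index Logstash writes to must match one of the configured patterns to be created on the fly. A rough model of the check (ignoring the +/- prefix syntax the real setting also supports):

```python
from fnmatch import fnmatch

# The patterns from the elasticsearch.yml line above.
restricted = [".security*", ".monitoring*", ".watches",
              ".triggered_watches", ".watcher-history*", ".ml*"]

def auto_create_allowed(index, patterns=restricted):
    """True if the index name matches any allowed pattern."""
    return any(fnmatch(index, p) for p in patterns)
```

logstash-2017.07.05 matches none of the patterns, hence the index_not_found_exception; removing the setting restores the default of allowing any index to be auto-created.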
Reference: https://discuss.elastic.co/t/index-404-error-in-logstash/91830
4 When building the ES cluster, the cluster itself must be healthy first; otherwise you see errors like:
[wp-node1] not enough master nodes discovered during pinging (found [[Candidate{node={wp-node1}{MxvGfK7lR6iIwww_frh_BOzcg}{yRKFsh2PT3aEzjI6IwwwG7n1Q}{192.168.100.203}{192.168.100.203:9300}{ml.machine_memory=16603000832, xpack.installed=true, ml.max_open_jobs=20,
][INFO ][o.e.d.z.ZenDiscovery ] [wp-node3] failed to send join request to master [{wp-node1}{MxvGfK7lR6iI_frh_BOzcg}{7Bc5VHF9QzCs-G2ERoAYaQ}{192.168.100.203}{192.168.100.203:9300}{ml.machine_memory=16603000832,
[wp-node2] failed to send join request to master [{wp-node1}{MxvGfK7lR6iI_frh_BOzcg}{M_VIMWHFSPmZT9n2D2_fZQ}{192.168.100.203}{192.168.100.203:9300}{ml.machine_memory=16603000832, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}], reason [RemoteTransportException[[wp-node1][192.168.100.203:9300][internal:discovery/zen/join]]; nested: IllegalArgumentException[can't add node {wp-node2}{MxvGfK7lR6iI_frh_BOzcg}{gO2EuagySwq9d5AfDTK6qg}{192.168.1.202}{192.168.1.202:9300}{ml.machine_memory=12411887616, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, found existing node } with the same id but is a different node instance]; ]
5 At this point curl access to the ES cluster requires certificates, e.g.:
curl https://192.168.100.203:9200/_xpack/security/_authenticate?pretty \
--key client.key --cert client.cer --cacert client-ca.cer -k -v
To access Logstash:
curl -v --cacert ca.crt https://192.168.100.203:5044
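For reference, the curl TLS flags used above map onto Python's ssl module roughly like this (the commented paths are placeholders for the certs from this section; the -k variant is shown because the curl commands use it):

```python
import ssl

# Strict verification, the equivalent of --cacert client-ca.cer:
#   ctx = ssl.create_default_context(cafile="client-ca.cer")
#   ctx.load_cert_chain("client.cer", "client.key")   # --cert / --key

# What curl's -k does: skip hostname and chain verification entirely.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE
```

The insecure context is fine for a quick smoke test of the endpoints, but any real client should use the strict variant with the CA generated earlier.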
6 Logstash logs errors like:
[2019-07-03T18:18:29,035][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@192.168.100.203:9200/][Manticore::ConnectTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2019-07-03T18:18:29,515][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@192.168.100.203:9200/"}
Cause: the configuration itself is correct, since some data does reach Elasticsearch; the errors appear when too much data is sent at once.
See also:
http://www.51niux.com/?id=210
https://www.elastic.co/guide/en/beats/filebeat/current/configuring-ssl-logstash.html