The setup uses Filebeat to collect logs, Redis as the log buffer, and Logstash to consume and process the logs and store them in Elasticsearch; Kibana displays the logs, and ElastAlert watches for long-running slow queries and sends alerts.
1. Installing the ELK stack
2. Installing ElastAlert
2.1 Official Git code
Deployed via Docker.
[root@centos2 opt]# git clone https://github.com/Yelp/elastalert.git
[root@centos2 opt]# cd elastalert
[root@centos2 elastalert]# ls
changelog.md docs Makefile requirements.txt tests
config.yaml.example elastalert pytest.ini setup.cfg tox.ini
docker-compose.yml example_rules README.md setup.py
Dockerfile-test LICENSE requirements-dev.txt supervisord.conf.example
# Create the Dockerfile
[root@centos2 elastalert]# cat Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade -y
RUN apt-get -y install build-essential python3 python3-dev python3-pip libssl-dev git
WORKDIR /home/elastalert
ADD requirements*.txt ./
RUN pip3 install -r requirements-dev.txt
# Build the image
[root@centos2 elastalert]# docker build -t elastalert:1 .
[root@centos2 elastalert]# docker run -itd --name elastalert -v `pwd`/:/home/elastalert/ elastalert:1
[root@centos2 elastalert]# docker exec -it elastalert bash
root@45f77d2936d4:/home/elastalert# pip install elastalert
2.2 Integrated Git code
Because the official Docker setup has not been updated in years, it causes a number of problems, and it also lacks the DingTalk plugin. To meet my own needs I integrated the DingTalk plugin and rewrote the Dockerfile. I have uploaded the files to my Gitee repository, merged with the official code; if you need it, just pull it directly.
git clone https://gitee.com/rubbishes/elastalert-dingtalk.git
cd elastalert-dingtalk
docker build -t elastalert:1 .
docker run -itd --name elastalert -v `pwd`/:/home/elastalert/ elastalert:1
3. Configuration
3.1 Filebeat configuration
root@mysql-178 filebeat-7.6.0-linux-x86_64 # vim filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/mysql/data/mysql-178-slow.log
    #- c:\programdata\elasticsearch\logs\*
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^# Time']
  exclude_lines: ['^# Time|^/usr/local/mysql/bin/mysqld|^Tcp port|^Time']
  multiline.pattern: '^# Time|^# User'
  multiline.negate: true
  multiline.match: after
  # Controls whether filebeat re-reads logs from the beginning; the default is to read from the beginning.
  #tail_files: true
  tags: ["mysql-slow-log"]
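With multiline.negate: true and multiline.match: after, any line matching multiline.pattern starts a new event and every following non-matching line is appended to it; exclude_lines then drops whole events that begin with the excluded prefixes. A small Python sketch of that grouping logic (the sample log lines are invented for illustration):

```python
import re

# Same pattern as multiline.pattern in the filebeat config above
PATTERN = re.compile(r'^# Time|^# User')
# Same prefixes as exclude_lines
EXCLUDE = re.compile(r'^# Time|^/usr/local/mysql/bin/mysqld|^Tcp port|^Time')

def group_multiline(lines):
    """Mimic negate: true + match: after - a line matching PATTERN starts
    a new event; every other line is appended to the current event."""
    events, current = [], []
    for line in lines:
        if PATTERN.search(line) and current:
            events.append('\n'.join(current))
            current = []
        current.append(line)
    if current:
        events.append('\n'.join(current))
    return events

# Invented sample resembling two slow-log entries
sample = [
    "# Time: 2021-03-01T02:00:01.000000Z",
    "# User@Host: root[root] @ [10.228.81.178]  Id: 42",
    "# Query_time: 12.3  Lock_time: 0.001  Rows_sent: 1  Rows_examined: 500000",
    "SET timestamp=1614564001;",
    "select * from big_table;",
    "# Time: 2021-03-01T02:05:00.000000Z",
    "# User@Host: root[root] @ [10.228.81.178]  Id: 43",
    "SET timestamp=1614564300;",
    "select sleep(15);",
]

raw_events = group_multiline(sample)
events = [e for e in raw_events if not EXCLUDE.search(e)]
print(len(raw_events), len(events))
```

Because both "# Time" and "# User" match the pattern, each "# Time" header becomes a one-line event of its own, which exclude_lines then discards, so every surviving event starts at "# User@Host".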
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: true
  # Period on which files under path should be checked for changes
  reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: 10.228.81.178
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
# hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  # Drop unneeded fields
  - drop_fields:
      fields: ["beat", "offset", "prospector"]
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
# Enable debug mode during initial testing; comment it out afterwards
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
#================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
# Output to Redis
output.redis:
  hosts: ["10.228.81.51:6379"]
  password: "123456"
  db: "1"
  key: "mysqllog"
  timeout: 5
  datatype: list
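With this output, filebeat pushes each event onto the Redis list mysqllog in db 1 as one JSON document, which the Logstash redis input later pops off. A sketch of what such a document looks like (the fields below are a trimmed, invented example of filebeat's event shape, not a captured event):

```python
import json

# Trimmed, invented example of the JSON filebeat pushes onto the "mysqllog" list
raw = '''{
  "@timestamp": "2021-03-01T02:00:01.000Z",
  "message": "# User@Host: root[root] @ [10.228.81.178]  Id: 42\\n# Query_time: 12.3 ...",
  "tags": ["mysql-slow-log"],
  "host": {"name": "10.228.81.178"}
}'''

# Logstash's redis input pops entries like this off the list, and its
# json filter then parses the "message" field further.
event = json.loads(raw)
print(event["tags"][0], event["host"]["name"])
```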
3.2 Logstash configuration
Deploy with Docker or the binary tarball; when deployed from the rpm package it reports that the ruby filter statement is not supported.
input {
  redis {
    host => "10.228.81.51"
    port => 6379
    password => "123456"
    db => "1"
    data_type => "list"
    key => "mysqllog"
  }
}
filter {
  json {
    source => "message"
  }
  grok {
    match => [ "message", "(?m)^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IPV4:clientip})?\]\s+Id:\s+%{NUMBER:row_id:int}\n#\s+Query_time:\s+%{NUMBER:query_time:float}\s+Lock_time:\s+%{NUMBER:lock_time:float}\s+Rows_sent:\s+%{NUMBER:rows_sent:int}\s+Rows_examined:\s+%{NUMBER:rows_examined:int}\n\s*(?:use %{DATA:database};\s*\n)?SET\s+timestamp=%{NUMBER:timestamp};\n\s*(?<sql>(?<action>\w+).*;)\s*(?:\n#\s+Time)?.*$" ]
  }
  # Replace the event timestamp with the one from the slow log
  date {
    locale => "en"
    match => ["timestamp", "UNIX"]
    target => "@timestamp"
  }
  # MySQL logs in UTC, eight hours behind our local time, so add eight hours
  # to the timestamp before sending it to ES
  ruby {
    code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*3600)"
  }
}
output {
  stdout {
    # Debug output for testing; disable it when done, or the log volume gets
    # huge, especially when shipping something like the MySQL binlog
    codec => rubydebug
  }
  # If the tags[0] field equals "mysql-slow-log", write to ES under an index
  # named mysql-slow-log-YYYY.MM.dd
  if [tags][0] == "mysql-slow-log" {
    elasticsearch {
      hosts => ["10.228.81.51:9200"]
      index => "%{[tags][0]}-%{+YYYY.MM.dd}"
    }
  }
}
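The grok pattern is awkward to debug inside Logstash. The extraction, plus the ruby filter's eight-hour shift, can be sketched in plain Python with a simplified regex; the pattern and the sample entry below are illustrative stand-ins, not the exact grok:

```python
import re
from datetime import datetime, timedelta, timezone

# Simplified stand-in for the grok pattern above (illustrative only)
SLOW = re.compile(
    r"^#\s+User@Host:\s+(?P<user>\w+)\[[^\]]+\]\s+@\s+\[(?P<clientip>[\d.]*)\]\s+Id:\s+(?P<row_id>\d+)\n"
    r"#\s+Query_time:\s+(?P<query_time>[\d.]+)\s+Lock_time:\s+(?P<lock_time>[\d.]+)\s+"
    r"Rows_sent:\s+(?P<rows_sent>\d+)\s+Rows_examined:\s+(?P<rows_examined>\d+)\n"
    r"SET\s+timestamp=(?P<timestamp>\d+);\n(?P<sql>.*;)"
)

# Invented slow-log entry in the format the pipeline sees
entry = ("# User@Host: root[root] @ [10.228.81.178]  Id: 42\n"
         "# Query_time: 12.3  Lock_time: 0.001  Rows_sent: 1  Rows_examined: 500000\n"
         "SET timestamp=1614564001;\n"
         "select * from big_table;")

fields = SLOW.match(entry).groupdict()

# The ruby filter's +8h shift: MySQL logs in UTC, local time is UTC+8
utc = datetime.fromtimestamp(int(fields["timestamp"]), tz=timezone.utc)
local = utc.astimezone(timezone(timedelta(hours=8)))
print(fields["query_time"], local.isoformat())
```

Feeding a real entry from the slow log through a script like this is a quick way to confirm the field names before wiring the pattern into Logstash.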
3.3 ElastAlert configuration
3.3.1 config.yaml
First copy the shipped example:
cp config.yaml.example config.yaml
Then adjust it as needed:
# Mainly set the ES address and port; nothing else needs changing
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: example_rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: 10.228.81.51
# The Elasticsearch port
es_port: 9200
# The index on es_host which is used for metadata storage
# This can be a unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
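To make the run_every/buffer_time pair concrete: ElastAlert wakes up every run_every, and each run queries a window reaching buffer_time into the past, so successive windows overlap heavily. A tiny sketch (the times are arbitrary):

```python
from datetime import datetime, timedelta

run_every = timedelta(minutes=1)     # how often ElastAlert queries ES
buffer_time = timedelta(minutes=15)  # how far back each query reaches

start = datetime(2021, 3, 1, 10, 0, 0)  # arbitrary reference time
# The first three query windows: each ends at the run time and spans buffer_time
windows = [(start + i * run_every - buffer_time, start + i * run_every)
           for i in range(3)]
for w_start, w_end in windows:
    print(w_start, "->", w_end)
```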
If you pulled from my Git repo, just edit the config.yaml file directly; the changes are much the same as above.
3.3.2 rule.yaml
This is where the alerting rules are defined.
DingTalk notification
cd example_rules
cat mysql_rule.yaml
# ES host and port
es_host: 10.228.81.51
es_port: 9200
# Do not use the HTTPS protocol
use_ssl: False
# Unique identifier for the rule; must be unique
name: My-Product Exception Alert
# Rule type
## Supported types: any, blacklist, whitelist, change, frequency, spike, flatline, new_term, cardinality
### frequency: fires when, under the same query_key, num_events matching events occur within timeframe
type: frequency
# Index name; supports wildcards and the same patterns as in Kibana
index: mysql-*
# Number of events that triggers the alert
num_events: 1
# Works with num_events: one match within 5 minutes triggers an alert
timeframe:
  minutes: 5
# Alert filter
filter:
- query:
    query_string:
      # This is ES query syntax; when testing, build the query in Kibana to
      # filter for what you want, then paste it here
      query: "user:eopuser OR user:root"
# Fields to include in the alert; defaults to all fields if unset
include: ["message","clientip","query_time"]
# Alert method; I use DingTalk here, but email and WeChat Work are also supported
alert:
- "elastalert_modules.dingtalk_alert.DingTalkAlerter"
# Your robot's webhook API
dingtalk_webhook: "https://oapi.dingtalk.com/robot/send?access_token=96eabeeaf956bb26128fed1259cxxxxxxxxxxfa6b2baeb"
# DingTalk message title; also the robot's keyword
dingtalk_msgtype: "text"
#alert_subject: "test"
# Message body format
alert_text: "
text: 1
IP: {}
QUERYTIME: {}
"
alert_text_args:
- clientip
- query_time
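For reference, the DingTalk robot webhook expects a JSON body in its text-message format; a minimal sketch of what the alerter POSTs (the webhook token is a placeholder, and the payload mirrors alert_text with alert_text_args substituted in):

```python
import json
import urllib.request

# Placeholder webhook; use your robot's real access_token
WEBHOOK = "https://oapi.dingtalk.com/robot/send?access_token=xxx"

def build_payload(clientip, query_time):
    # "text" msgtype matches dingtalk_msgtype in the rule; the content
    # mirrors alert_text with the alert_text_args filled in
    return {
        "msgtype": "text",
        "text": {"content": "text: 1\nIP: %s\nQUERYTIME: %s" % (clientip, query_time)},
    }

payload = build_payload("10.228.81.178", 12.3)
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    WEBHOOK, data=body, headers={"Content-Type": "application/json"}
)
# urllib.request.urlopen(req) would actually send the alert; not executed here
print(json.dumps(payload))
```

Note that a robot configured with keyword security only accepts messages whose content contains the keyword, which is why the rule keeps "text" in the body.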
Email notification
# Much the same as DingTalk; you just need to add the email-related settings
root@45f77d2936d4:/home/elastalert/example_rules# cat myrule_email.yaml
es_host: 10.228.81.51
es_port: 9200
use_ssl: False
#The name property must be unique; ideally it identifies your product
name: My-Product Exception Alert
#Type: I chose to send an email alert for any matching event
type: any
#Indices to monitor; wildcards supported
index: mysql-*
num_events: 50
timeframe:
hours: 4
filter:
- query:
query_string:
query: "user:eopuser OR user:root"
#Alert via email
alert:
- "email"
#Email body
alert_text: "test"
#SMTP server settings (mine is an Alibaba enterprise mailbox)
smtp_host: smtp.mxhichina.com
smtp_port: 25
#Auth file; must contain the user and password properties
smtp_auth_file: smtp_auth_file.yaml
email_reply_to: test@test.com
from_addr: test@test.com
#List of recipient email addresses
email:
- "test@test.com"
# Since the account and password live in a yaml file, create it in the same directory
root@45f77d2936d4:/home/elastalert/example_rules# cat smtp_auth_file.yaml
user: "test@test.com"
password: "123456"
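Under the hood the email alerter speaks plain SMTP; a minimal standalone sketch with Python's smtplib/email, using the placeholder addresses and server from the rule above (the send call is commented out so nothing is actually mailed):

```python
import smtplib
from email.mime.text import MIMEText
from email.header import Header

SMTP_HOST, SMTP_PORT = "smtp.mxhichina.com", 25
USER, PASSWORD = "test@test.com", "123456"   # from smtp_auth_file.yaml

msg = MIMEText("test", "plain", "utf-8")     # matches alert_text in the rule
msg["Subject"] = Header("My-Product Exception Alert", "utf-8")
msg["From"] = USER
msg["To"] = "test@test.com"

# Sending is left commented out; uncomment to actually deliver the mail:
# with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as s:
#     s.login(USER, PASSWORD)
#     s.sendmail(USER, ["test@test.com"], msg.as_string())
print(msg["To"])
```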
Note: if you build from my code, you must edit the example_rules/myrule.yaml rule file (other rule file names won't take effect), or alternatively modify my run.sh script.
3.3.3 Installing the dingtalk plugin
The stock build has no dingtalk plugin, so it must be installed manually. If you built from my Dockerfile it is already included, and you can skip this step.
git clone https://github.com.cnpmjs.org/xuyaoqiang/elastalert-dingtalk-plugin.git
cd elastalert-dingtalk-plugin/
# Copy the elastalert_modules directory into the elastalert root directory
cp -r elastalert_modules ../elastalert/
4. Startup
Start order:
ES > Kibana > elastalert > Redis > Filebeat > Logstash
The main thing is to start ES first so that Kibana can come up; start elastalert next so alerting is ready; then Redis, to buffer the logs filebeat collects; then filebeat, which ships logs into Redis; and finally Logstash, which consumes the data from Redis and stores it in ES.
Starting the other components is covered in my earlier posts, so here I'll only add a note about starting elastalert.
As before, if you run the Docker image built from my code, this step is unnecessary.
# Enter the container
[root@centos2 elastalert]# docker exec -it elastalert bash
# First verify that the rule file is valid
root@45f77d2936d4:/home/elastalert# elastalert-test-rule example_rules/myrule.yaml
# If it passes, run ElastAlert in the background
root@45f77d2936d4:/home/elastalert# nohup python3 -m elastalert.elastalert --verbose --rule example_rules/myrule.yaml &
root@45f77d2936d4:/home/elastalert# exit
5. Extras
Writing the elastalert Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade -y && apt-get install -y build-essential python3 python3-dev python3-pip libssl-dev git && echo "Asia/Shanghai" > /etc/timezone
WORKDIR /home/elastalert
ADD ./* ./
RUN pip install elastalert && ln -sf /dev/stdout elastalert.log
CMD ["/bin/bash","run.sh"]
Run it:
docker run -itd --name elastalert -v /root/elastalert/:/home/elastalert/ -v /etc/localtime:/etc/localtime elastalert:1
6. Documentation
https://github.com/xuyaoqiang/elastalert-dingtalk-plugin
This article is excerpted from: https://blog.51cto.com/u