Preface

ELK is short for Elasticsearch, Logstash, and Kibana. Elasticsearch handles searching, analyzing, and storing data; Logstash collects and processes log data; Kibana is a web-based UI for searching, analyzing, and visualizing the log data stored in Elasticsearch indices.

Setting up ELK

The architecture is shown below.
(Figure: ELK architecture diagram)
This setup is Docker-based, so install Docker first if you have not. All ELK components here use version 7.6.0.

Setting up Elasticsearch

1 Pull the image

docker pull elasticsearch:7.6.0

2 Create the mount directories
Create three mount directories, config, data, and plugins, under /data/elk:

mkdir -p /data/elk/{config,data,plugins}

config holds the elasticsearch.yml and jvm.options configuration files, and plugins holds analyzer (word-segmentation) plugins.
vi elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0

http.cors.enabled: true
http.cors.allow-origin: "*"

vi jvm.options

-Xms256m
-Xmx512m

## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
14-:-XX:+UseG1GC
14-:-XX:G1ReservePercent=25
14-:-XX:InitiatingHeapOccupancyPercent=30

## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m

Pay particular attention to the maximum heap size: if it exceeds the server's available memory, Elasticsearch will not start. I use -Xms256m -Xmx512m here.

3 Raise the virtual-memory limit to the required minimum

sysctl -w vm.max_map_count=262144

Without this setting, Elasticsearch reports a virtual-memory error at startup.
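Note that sysctl -w only changes the running kernel; the value resets on reboot. A minimal sketch of persisting it, assuming the host honors /etc/sysctl.conf:

```shell
# Persist vm.max_map_count across reboots (run as root).
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
# Reload sysctl settings and confirm the live value.
sysctl -p
sysctl vm.max_map_count
```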

4 Start Elasticsearch

Switch to the elk directory:
cd /data/elk
Then run:

docker run -d --name es -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -e "discovery.type=single-node" -v $PWD/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v $PWD/data/:/usr/share/elasticsearch/data -v $PWD/config/jvm.options:/usr/share/elasticsearch/config/jvm.options -v $PWD/plugins/:/usr/share/elasticsearch/plugins elasticsearch:7.6.0

The container starts with the directories mounted. Once it is up, visit http://ip:9200; seeing the ES version information below confirms the installation succeeded:

{
  "name" : "11c2eb95745a",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "4Q92plG7RxC-gAqC0GiIZQ",
  "version" : {
    "number" : "7.6.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "7f634e9f44834fbc12724506cc1da681b0c3b1e3",
    "build_date" : "2020-02-06T00:09:00.449973Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
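Beyond the root endpoint, the cluster health API is a quick sanity check. A sketch, assuming ES is reachable on localhost:

```shell
# Query cluster health; for a single-node setup expect "status" : "yellow" or "green".
curl -s "http://localhost:9200/_cluster/health?pretty"
```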
Setting up Logstash

1 Pull the image

docker pull logstash:7.6.0

2 Obtain the configuration files
Start Logstash once, copy out everything under its config and pipeline directories, then delete the container:

docker run -d -p 5044:5044 --name logstash  logstash:7.6.0

3 Create the mount directory

mkdir -p /data/elk/logstash

4 Copy the files

docker cp logstash:/usr/share/logstash/config  /data/elk/logstash/
docker cp logstash:/usr/share/logstash/pipeline /data/elk/logstash/

5 Edit logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://your es ip:9200" ]

6 Delete the container

docker rm -f logstash

7 Start Logstash

docker run -d -p 5044:5044 \
-v /data/elk/logstash/config:/usr/share/logstash/config \
-v /data/elk/logstash/pipeline:/usr/share/logstash/pipeline \
--name logstash \
logstash:7.6.0
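Logstash can take a while to come up. A couple of quick checks that the container started cleanly (the container name comes from the run command above):

```shell
# Confirm the container is still running.
docker ps --filter name=logstash
# Tail the startup logs; a healthy start logs "Successfully started Logstash API endpoint".
docker logs --tail 50 logstash
```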
Setting up Kibana

1 Pull the image

docker pull kibana:7.6.0

2 Create the mount directory

mkdir -p /data/elk/kibana

Create the configuration file kibana.yml, which will be mounted into the Kibana container:

vi kibana.yml

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://your es ip:9200"]
i18n.locale: "zh-CN"

3 Start Kibana

docker run -d --name kibana -p 5601:5601 -v /data/elk/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.6.0

Once it is up, open http://ip:5601 in a browser and you should see the Kibana console.
(Figure: Kibana console)
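If the console does not load, Kibana's status API and the container logs are the first places to look. A sketch, assuming Kibana runs on localhost:

```shell
# The status API reports overall health and per-plugin state.
curl -s "http://localhost:5601/api/status"
# Inspect startup logs if the UI is unreachable.
docker logs --tail 50 kibana
```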

Syncing MySQL data to ES

First create a mysql.conf file under /data/elk/logstash/pipeline. For an explanation of the individual options, see the official jdbc input plugin documentation.

input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/test?characterEncoding=UTF-8&useSSL=false&autoReconnect=true"
        jdbc_user => "root"
        jdbc_password => "your mysql password"
        # JDBC driver library
        jdbc_driver_library => "/data/elk/logstash/pipeline/mysql-connector-java-8.0.13.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        record_last_run => true
        # file that records the last run's tracking value
        last_run_metadata_path => "/data/elk/logstash/pipeline/lastvalue.txt"
        # track progress using a database column's value
        use_column_value => true
        tracking_column => "id"
        tracking_column_type => "numeric"
        # if true, the last_run_metadata_path record is cleared, i.e. syncing starts over
        clean_run => false
        statement => "SELECT * FROM user"
        schedule => "* * * * *"
        # event type
        type => "jdbc"
    }
}
output {
    elasticsearch {
        hosts => ["ip:9200"]
        index => "user"
        document_id => "%{id}"
    }
    stdout {
        codec => json_lines
    }
}

Also upload the MySQL driver jar to the path specified by jdbc_driver_library.
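If you do not have the driver jar locally, it can be fetched from Maven Central (version chosen to match the jdbc_driver_library path above):

```shell
# Download MySQL Connector/J 8.0.13 into the pipeline directory,
# matching the jdbc_driver_library path in mysql.conf.
wget -P /data/elk/logstash/pipeline \
  https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.13/mysql-connector-java-8.0.13.jar
```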

Then edit pipelines.yml in the config directory so that it points at the mysql.conf file:

- pipeline.id: main
  path.config: "/data/elk/logstash/pipeline/mysql.conf"

Finally, restart Logstash:

docker restart logstash
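Once Logstash restarts and the schedule fires (every minute, per the cron expression above), you can verify that rows from the user table arrived in ES. A sketch, assuming ES is on localhost:

```shell
# Count documents in the "user" index; should match the MySQL row count after a sync cycle.
curl -s "http://localhost:9200/user/_count?pretty"
# Fetch a few documents to spot-check the synced fields.
curl -s "http://localhost:9200/user/_search?size=3&pretty"
```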
