Installing an ELK Cluster on CentOS 7


0. Prepare the Environment

  • OS: CentOS 7
  • Hadoop distributed deployment finished
  • Hive deployed
  • Files needed:

    • elasticsearch-7.17.1-linux-x86_64.tar.gz
    • kibana-7.17.1-linux-x86_64.tar.gz
    • logstash-7.17.1-linux-x86_64.tar.gz

1. Install ElasticSearch on All Nodes

1. Extract the archive

tar -xzf elasticsearch-7.17.1-linux-x86_64.tar.gz -C /

2. Add the ElasticSearch environment variables (run on every machine)

vim /etc/profile
# Append the environment variables at the end
export ELASTICSEARCH_HOME=/elasticsearch-7.17.1
export PATH=$PATH:$ELASTICSEARCH_HOME/bin

# Reload the environment variables
source /etc/profile
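A quick sanity check that the new variables are in effect; `which` should resolve the binary via the updated PATH:

echo $ELASTICSEARCH_HOME
which elasticsearch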

3. Make ElasticSearch use its bundled JDK

cd /elasticsearch-7.17.1
vim bin/elasticsearch-env
# Insert the JAVA_HOME assignment on line 2
JAVA_HOME="/elasticsearch-7.17.1/jdk"
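The bundled JDK can be sanity-checked before going further (a minimal check, assuming the default bundled-JDK path inside the install directory):

/elasticsearch-7.17.1/jdk/bin/java -version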

4. Change the garbage collector setting

The bundled JDK in ElasticSearch 7.17 no longer supports the CMS collector, so switch it to G1GC.

vim config/jvm.options
# Around line 52
##### Before #####
-XX:+UseConcMarkSweepGC
#################
##### After #####
-XX:+UseG1GC
#################
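The same edit can be made non-interactively; a sketch assuming the CMS flag appears exactly once in jvm.options:

sed -i 's/-XX:+UseConcMarkSweepGC/-XX:+UseG1GC/' config/jvm.options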

5. Edit the main config file to set up the cluster

vim config/elasticsearch.yml
##### YAML file, mind the formatting #####
cluster.name: es
node.name: node-1 # node name, different on each machine (node-1 on the master; changed to node-2/node-3 on the slaves in step 9)
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["master", "slave1", "slave2"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
# Enable CORS so third-party plugins can query ES
http.cors.enabled: true
http.cors.allow-origin: "*"
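discovery.seed_hosts refers to the nodes by hostname, so every machine must be able to resolve master, slave1 and slave2. A minimal /etc/hosts sketch, assuming the node IPs used in the Kibana and Logstash examples later in this post:

cat >> /etc/hosts <<'EOF'
10.107.116.10 master
10.107.116.11 slave1
10.107.116.12 slave2
EOF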

6. Raise the open-file and process limits for the es user (run on every machine)

vim /etc/security/limits.conf
# Append at the end
es soft nofile 65535
es hard nofile 65535
es soft nproc 4096
es hard nproc 4096
# End of file
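Once the es user exists (it is created in section 2 below), the new limits can be verified from a fresh login shell; a quick sketch:

su - es -c 'ulimit -n; ulimit -u'   # expect 65535 and 4096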

7. Raise the maximum virtual memory map count (run on every machine)

vim /etc/sysctl.conf
# Append at the end. Note: sysctl.conf only accepts key = value entries;
# `ulimit -n 65536` is a shell command, not a sysctl key, so it is omitted here
# (the open-file limit was already raised via limits.conf in step 6).
vm.max_map_count = 262144

# Manually reload the kernel settings
sysctl -p
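Confirm the setting took effect:

sysctl vm.max_map_count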

8. Copy to the other nodes

cd /
scp -r elasticsearch-7.17.1 root@slave1:/
scp -r elasticsearch-7.17.1 root@slave2:/

# Reload the environment variables (the exports from step 2 were added on every machine)
source /etc/profile

9. Edit the main config on the other two machines

###### slave1 ######
cd /elasticsearch-7.17.1
vim config/elasticsearch.yml
node.name: node-2 # node name, different on each machine (1-3)
####################
###### slave2 ######
cd /elasticsearch-7.17.1
vim config/elasticsearch.yml
node.name: node-3 # node name, different on each machine (1-3)
####################

2. Create a New User, Set a Password, and Change Ownership

ElasticSearch refuses to start as the root user, so create a dedicated user.

Run on all three machines:

# Create the user
useradd es

# Set the password (000000 in this walkthrough)
passwd es

# Change ownership of the install directory
cd /
chown -Rf es:es /elasticsearch-7.17.1/

3. Switch User and Start

Run on all three machines:

# Switch to the es user
su es

# Start ElasticSearch in the foreground
elasticsearch

4. Verify the Cluster Status

http://<any-node-IP>:9200/_cluster/health?pretty
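The same check can be run from a shell (a sketch; substitute any node's hostname or IP):

curl -s 'http://master:9200/_cluster/health?pretty'

A healthy three-node cluster returns something like: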

{
  "cluster_name" : "es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 3,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

status values:

  • red: the cluster is down
  • yellow: all primary shards are allocated, but some replicas are not
  • green: the cluster is healthy; all primary and replica shards are allocated

5. Stop the Foreground ES and Restart in the Background

Press Ctrl+C to terminate the foreground process.

Background start command:

elasticsearch -d
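Optionally, record a pid file at startup so the daemon can be stopped cleanly later; -d and -p are standard ElasticSearch startup flags:

elasticsearch -d -p /tmp/es.pid
pkill -F /tmp/es.pid   # stop the daemon using the recorded pid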

6. Install Kibana

Single node:

1. Extract the archive

tar -xzf kibana-7.17.1-linux-x86_64.tar.gz -C /

2. Edit the config file

cd /kibana-7.17.1-linux-x86_64
vim config/kibana.yml
###### Changes ######
server.host: "0.0.0.0"
server.name: "master" # hostname
elasticsearch.hosts: ["http://10.107.116.10:9200"] # ES address
kibana.index: ".kibana"
i18n.locale: "zh-CN"     # Chinese UI
#elasticsearch.username: "admin"    # username, if ES auth is enabled
#elasticsearch.password: "000000"   # password
#####################

3. Create the kibana user and change ownership

Create the user:

useradd kibana
passwd kibana # set the password (000000 in this walkthrough)

Change ownership:

cd /
chown -Rf kibana:kibana /kibana-7.17.1-linux-x86_64/

4. Start Kibana in the background

su kibana
cd /kibana-7.17.1-linux-x86_64/
nohup bin/kibana >> /dev/null 2>&1 &
exit

5. Verify Kibana is running

Open in a browser:

http://10.107.116.10:5601/
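From a shell, Kibana's status endpoint makes a simple readiness probe (a sketch against the local instance; it can take a minute after startup before it returns 200):

curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:5601/api/status'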

7. Install Logstash

Single node:

1. Extract the archive

tar -xzf logstash-7.17.1-linux-x86_64.tar.gz -C /

2. Prepare grok patterns

  • Create the patterns directory:
cd /logstash-7.17.1/
mkdir patterns
  • Create the java pattern file:
vim patterns/java
###############
# user-center
MYAPPNAME ([0-9a-zA-Z_-]*)
# RMI TCP Connection(2)-127.0.0.1
MYTHREADNAME ([0-9a-zA-Z._-]|\(|\)|\s)*
###############

3. Edit the config files

  • Create the pipeline config file logstash.conf
  • Before using the configuration below, adjust:

    • the two patterns_dir entries in the filter block
    • the ElasticSearch username/password entries, if authentication is enabled
vim config/logstash.conf
###########################
input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [fields][docType] == "sys-log" {
    grok {
      patterns_dir => ["/logstash-7.17.1/patterns"]
      match => { "message" => "\[%{NOTSPACE:appName}:%{IP:serverIp}:%{NOTSPACE:serverPort}\] %{TIMESTAMP_ISO8601:logTime} %{LOGLEVEL:logLevel} %{WORD:pid} \[%{MYAPPNAME:traceId}\] \[%{MYTHREADNAME:threadName}\] %{NOTSPACE:classname} %{GREEDYDATA:message}" }
      overwrite => ["message"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS Z"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS"]
      target => "timestamp"
      locale => "en"
      timezone => "+08:00"
    }
    mutate {
      remove_field => "logTime"
      remove_field => "@version"
      remove_field => "host"
      remove_field => "offset"
    }
  }
  if [fields][docType] == "point-log" {
    grok {
      patterns_dir => ["/logstash-7.17.1/patterns"]
      match => {
        "message" => "%{TIMESTAMP_ISO8601:logTime}\|%{MYAPPNAME:appName}\|%{WORD:resouceid}\|%{MYAPPNAME:type}\|%{GREEDYDATA:object}"
      }
    }
    kv {
        source => "object"
        field_split => "&"
        value_split => "="
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS Z"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS"]
      target => "timestamp"
      locale => "en"
      timezone => "+08:00"
    }
    mutate {
      remove_field => "message"
      remove_field => "logTime"
      remove_field => "@version"
      remove_field => "host"
      remove_field => "offset"
    }
  }
  if [fields][docType] == "mysqlslowlogs" {
    grok {
        match => [
          "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s+%{NUMBER:id}\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nuse\s(?<dbname>\w+);\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query_str>[\s\S]*)",
          "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s+%{NUMBER:id}\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query_str>[\s\S]*)",
          "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nuse\s(?<dbname>\w+);\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query_str>[\s\S]*)",
          "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query_str>[\s\S]*)"
        ]
    }
    date {
      match => ["timestamp_mysql","yyyy-MM-dd HH:mm:ss.SSS","UNIX"]
    }
    date {
      match => ["timestamp_mysql","yyyy-MM-dd HH:mm:ss.SSS","UNIX"]
      target => "timestamp"
    }
    mutate {
      convert => ["query_time", "float"]
      convert => ["lock_time", "float"]
      convert => ["rows_sent", "integer"]
      convert => ["rows_examined", "integer"]
      remove_field => "message"
      remove_field => "timestamp_mysql"
      remove_field => "@version"
    }
  }
}

output {
  if [fields][docType] == "sys-log" {
    elasticsearch {
      hosts => ["http://10.107.116.10:9200"]
      index => "sys-log-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "000000"
    }
  }
  if [fields][docType] == "point-log" {
    elasticsearch {
      hosts => ["http://10.107.116.11:9200"]
      index => "point-log-%{+YYYY.MM.dd}"
      routing => "%{type}"
      #user => "elastic"
      #password => "000000"
    }
  }
  if [fields][docType] == "mysqlslowlogs" {
    elasticsearch {
      hosts => ["http://10.107.116.12:9200"]
      index => "mysql-slowlog-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "000000"
    }
  }
}

###########################
  • Edit the Logstash settings file
vim config/logstash.yml
################
api.http.host: 0.0.0.0
################
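Before starting, the pipeline syntax can be checked without actually running it (--config.test_and_exit is a standard Logstash flag):

cd /logstash-7.17.1/
bin/logstash -f config/logstash.conf --config.test_and_exit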

4. Start Logstash in the background

cd /logstash-7.17.1/
nohup bin/logstash -f config/logstash.conf &

5. Verify startup

ps -ef | grep logstash # check that the Logstash process is running
cat config/logstash.conf # review the pipeline config that was loaded
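Logstash also exposes a monitoring API on port 9600 (reachable remotely because api.http.host was set to 0.0.0.0 above); querying it from the local machine:

curl -s 'http://localhost:9600/?pretty'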

6. Troubleshooting startup errors

If startup fails with the following error:

[2022-03-31T19:22:29,834][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
    at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.20.1.jar:?]
    at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.20.1.jar:?]
    at logstash_minus_7_dot_17_dot_1.lib.bootstrap.environment.<main>(/logstash-7.17.1/lib/bootstrap/environment.rb:94) ~[?:?]

Solution: remove the stale lock file that a previous unclean shutdown left in the data directory:

rm -rf /logstash-7.17.1/data/.lock

Then restart Logstash.
