DevOps with microservices, integrating third-party services --- comparing filebeat and logstash paired with redis for log collection

1. The role of redis

Besides its use as a NoSQL store, redis is commonly used as a message queue. In an ELK stack the volume of collected logs can be very large, while logstash tops out at a few thousand events at a time; past that point the input side starts to block. Queueing the collected data first, like a ticket line, eases the pressure at the entrance and lets logstash read from redis at its own pace, in order. As noted earlier, production setups should still prefer filebeat for shipping logs to logstash, but for the sake of comparison the logs are collected and processed in both of the following ways (a quick sketch of the queue semantics follows the list):

  • filebeat collects the nginx access log and the messages log and writes them to redis; logstash pulls the data from redis
  • logstash collects the nginx access log and the messages log and writes them to redis; another logstash pulls the data from redis
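
As a quick illustration of those queue semantics: a redis list is FIFO storage where the producer appends to the tail and the consumer pops from the head (roughly what filebeat's redis output and logstash's redis input do with RPUSH and BLPOP). A minimal redis-cli sketch, reusing the password and key names from the configs below:

redis-cli -a Lanbroad202 RPUSH nginx-log '{"msg":"event 1"}'   # producer appends to the tail
redis-cli -a Lanbroad202 LLEN nginx-log                        # current queue depth
redis-cli -a Lanbroad202 LPOP nginx-log                        # consumer pops the oldest event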

2. Writing data to redis with filebeat

filebeat configuration
cat filebeat.yml

filebeat.prospectors:
- input_type: log
  enabled: true
  paths:
    - /var/log/nginx_access_log
  json.keys_under_root: true
  json.overwrite_keys: true
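  # the json.* settings assume nginx writes its access log as JSON (via a
  # JSON-escaped log_format in nginx.conf); decoded fields land at the event
  # root and, on conflict, overwrite the fields filebeat itself adds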
  fields:
    filetype: nginx-log
  fields_under_root: true

- input_type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    filetype: systemlog
  fields_under_root: true

filebeat.registry_file: /data/registry

output.redis:
  enabled: true
  hosts: ["redis:6379"]
  key: nginx-log
  password: Lanbroad202
  db: 0

ps: there is no need to create the redis key in advance; the only prerequisite is that the redis password is set.
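
A sketch of setting the password at runtime (a CONFIG SET does not survive a restart; for persistence put requirepass into redis.conf instead):

redis-cli CONFIG SET requirepass Lanbroad202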

The data stored in redis looks like this:
[screenshot: contents of the nginx-log key in redis]
redis does not alter the data structure, and it does not distinguish between the different logs either: everything is stored in the same key, with data type list.
ps: data is sitting in redis here only because the logstash service has not been started yet. Once logstash starts, the data moves on to it; redis is merely a temporary waiting area, and so is logstash. Both act as queues; the component that actually stores the data is ES.
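
Instead of a screenshot, the key can also be inspected directly with redis-cli (host and db match the filebeat output above):

redis-cli -h redis -n 0 -a Lanbroad202 TYPE nginx-log          # should print "list"
redis-cli -h redis -n 0 -a Lanbroad202 LLEN nginx-log          # number of queued events
redis-cli -h redis -n 0 -a Lanbroad202 LRANGE nginx-log 0 0    # peek at the oldest entry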

3. Pulling data from redis with logstash (data collected by filebeat)

logstash configuration
cat logstash.conf

input {
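    # two identical consumers on one list simply share the queue between them;
    # the routing in the output below relies on the "filetype" field that
    # filebeat attached to each event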
    redis {
        key => "nginx-log"
        host => "redis"
        password => "Lanbroad202"
        port => 6379
        db => "0"
        data_type => "list"
        type => "one"
    }

    redis {
        key => "nginx-log"
        host => "redis"
        password => "Lanbroad202"
        port => 6379
        db => "0"
        data_type => "list"
        type => "two"
    }
}

output {

  if [filetype] == "nginx-log" {
      elasticsearch {
          hosts => ["http://elasticsearch:9200"]
          index => "nginx-log-%{+YYYY.MM.dd}"
      }
  }

  if [filetype] == "systemlog" {
      elasticsearch {
          hosts => ["http://elasticsearch:9200"]
          index => "systemlog-%{+YYYY.MM.dd}"
      }
  }

  stdout { codec => rubydebug }
}

Start logstash, then check redis:
[screenshot: the nginx-log key is now empty]
No data left, which means all of the temporarily queued data has been fed into logstash.

Finally, check ElasticHD and Kibana:

[screenshots: ElasticHD and Kibana showing the nginx-log and systemlog indices]
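
The same check also works from the command line against the ES HTTP API:

curl -s 'http://elasticsearch:9200/_cat/indices/nginx-log-*,systemlog-*?v'
curl -s 'http://elasticsearch:9200/nginx-log-*/_count'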

4. Writing data to redis with logstash

Clear out all of the data above to make testing easier.
To keep the two roles apart, logstash is deployed on two separate nodes: the logstash on node 150 collects the logs and writes them to redis, while the logstash on node 151 pulls them back out of redis. The compose files for these two logstash services follow.

cat docker-compose.yml

version: "3"
services:
  logstash2:
    image: logstash:7.7.0
    networks:
      - elk_overlay
      - database_overlay
    volumes:
      - "$WORK_HOME_LOGSTASH/pipeline:/usr/share/logstash/pipeline"
      - "/etc/localtime:/etc/localtime:ro"
      - $WORK_HOME_NGINX/logs/nginx_access_log:/var/log/nginx_access_log
      - /var/log/messages:/var/log/messages
    deploy:
      placement:
        constraints: [node.hostname==node150]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  elk_overlay:
    external: true
  database_overlay:
    external: true
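
The two WORK_HOME_* variables must be set in the shell that runs the deploy; the paths below are only examples:

export WORK_HOME_LOGSTASH=/data/elk/logstash
export WORK_HOME_NGINX=/data/elk/nginx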

Configuration file
cat logstash.conf

input {

    file {
        path => "/var/log/nginx_access_log"
        codec => json
        start_position => "beginning"
        type => "nginx-log"
    }

    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}

output {
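  # note: both branches deliberately push to the same "nginx-log" list; the
  # indexing logstash on node 151 tells the two apart by the "type" field set
  # in the file inputs above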
 
    if [type] == "nginx-log" {
        redis {
            host => "redis"
            port => "6379"
            password => "Lanbroad202"
            db => "0"
            key => "nginx-log"
            data_type => "list"
        }
    }

    if [type] == "system" {
        redis {
            host => "redis"
            port => "6379"
            password => "Lanbroad202"
            db => "0"
            key => "nginx-log"
            data_type => "list"
        }
    }
}

Start the service:
docker stack deploy -c docker-compose.yml 150
Check redis:
[screenshot: collected events queued in the nginx-log key]

5. Pulling data from redis with logstash (data collected by logstash)

cat docker-compose.yml

version: "3"
services:
  logstash1:
    image: logstash:7.7.0
    networks:
      - elk_overlay
      - database_overlay
    volumes:
      - "$WORK_HOME_LOGSTASH/pipeline:/usr/share/logstash/pipeline"
      - "/etc/localtime:/etc/localtime:ro"
      # - $WORK_HOME_NGINX/logs/nginx_access_log:/var/log/nginx_access_log
      # - /var/log/messages:/var/log/messages
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  elk_overlay:
    external: true
  database_overlay:
    external: true

Configuration file
cat logstash.conf

input {
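    # events coming off redis already carry the "type" set by the shipping
    # logstash ("nginx-log" / "system"); logstash applies an input's type only
    # to events that do not have one yet, so "one"/"two" below are effectively
    # inert for these events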
    redis {
        key => "nginx-log"
        host => "redis"
        password => "Lanbroad202"
        port => 6379
        db => "0"
        data_type => "list"
        type => "one"
    }

    redis {
        key => "nginx-log"
        host => "redis"
        password => "Lanbroad202"
        port => 6379
        db => "0"
        data_type => "list"
        type => "two"
    }
}

output {

  if [type] == "nginx-log" {
      elasticsearch {
          hosts => ["http://elasticsearch:9200"]
          index => "nginx-log-%{+YYYY.MM.dd}"
      }
  }

  if [type] == "system" {
      elasticsearch {
          hosts => ["http://elasticsearch:9200"]
          index => "systemlog-%{+YYYY.MM.dd}"
      }
  }

  stdout { codec => rubydebug }
}
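
The stack on node 151 is deployed the same way (the stack name here is illustrative):

docker stack deploy -c docker-compose.yml 151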


6. Summary

1. Whichever structure is used, start the services in the order elasticsearch, elastichd, kibana, redis, logstash, filebeat.
2. To test whether collection works, you can start only redis and the collector, leave the downstream extractor and store off, and check whether data appears in redis.
3. In this article, all log data is stored in the same key.