Setting up and using Logstash with Docker, in detail
Configuring Logstash
Search for and pull the image (pin the same version as your Elasticsearch):
[root@hao ~]# docker search logstash
NAME                                                           DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
logstash                                                       Logstash is a tool for managing events and l…   2165    [OK]
opensearchproject/logstash-oss-with-opensearch-output-plugin   The Official Docker Image of Logstash with O…   19
grafana/logstash-output-loki                                   Logstash plugin to send logs to Loki            3
bitnami/logstash                                                                                               6
bitnami/logstash-exporter-archived                             A copy of the container images of the deprec…   0
rancher/logstash-config                                                                                        2
bitnamicharts/logstash                                                                                         0
dtagdevsec/logstash                                            T-Pot Logstash                                  4                  [OK]
malcolmnetsec/logstash-oss                                     Logstash data processing pipeline, as used b…   1
itzg/logstash                                                  Logstash with the ability to groom its own E…   2                  [OK]
uselagoon/logstash-7                                                                                           0
uselagoon/logstash-6                                                                                           0
jhipster/jhipster-logstash                                     Logstash image (based on the official image)…   5                  [OK]
itzg/logback-kafka-relay                                       Receives remote logback events, sends them t…   0
sequra/logstash_exporter                                       Prometheus exporter for the metrics availabl…   3
bonniernews/logstash_exporter                                  Prometheus exporter for Logstash 5.0+           3                  [OK]
monsantoco/logstash                                            Logstash Docker image based on Alpine Linux …   9                  [OK]
elastic/logstash                                               The Logstash docker images maintained by Ela…   27
komljen/logstash                                               Logstash kube image                             0                  [OK]
geoint/logstash-elastic-ha                                     Logstash container for ElasticSearch forward…   2                  [OK]
datasense/logstash_indexer                                     Logstash + crond curator                        0
mantika/logstash-dynamodb-streams                              Logstash image which includes dynamodb plugi…   4                  [OK]
digitalwonderland/logstash-forwarder                           Docker Logstash Integration - run once per D…   14                 [OK]
cfcommunity/logstash                                           https://github.com/cloudfoundry-community/lo…   0
vungle/logstash-kafka-es                                       A simple Logstash image to ship json logs fr…   1                  [OK]
[root@hao ~]# docker pull logstash:7.17.7
7.17.7: Pulling from library/logstash
fb0b3276a519: Already exists
4a9a59914a22: Pull complete
5b31ddf2ac4e: Pull complete
162661d00d08: Pull complete
706a1bf2d5e3: Pull complete
741874f127b9: Pull complete
d03492354dd2: Pull complete
a5245bb90f80: Pull complete
05103a3b7940: Pull complete
815ba6161ff7: Pull complete
7777f80b5df4: Pull complete
Digest: sha256:93030161613312c65d84fb2ace25654badbb935604a545df91d2e93e28511bca
Status: Downloaded newer image for logstash:7.17.7
docker.io/library/logstash:7.17.7
Preparation
Create the directories, and give the data directory 777 permissions:
[root@hao /usr/local/software/elk/logstash]# ll
total 0
drwxrwsr-x. 2 root root 66 Dec  6 10:12 config
drwxrwxrwx. 4 root root 69 Dec  6 10:18 data
Only the logstash.yml, pipelines.yml, and logstash.conf files need to be created by hand:
[root@hao /usr/local/software/elk/logstash]# tree
.
├── config
│   ├── jvm.options
│   ├── logstash.yml
│   └── pipelines.yml
├── data
│   ├── dead_letter_queue
│   ├── queue
│   └── uuid
└── pipeline
    └── logstash.conf
5 directories, 5 files

Their contents are, respectively:

logstash.yml:
path.logs: /usr/share/logstash/logs
config.test_and_exit: false
config.reload.automatic: false
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.133.100:9200" ]
pipelines.yml:

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/logstash.conf"

logstash.conf:
input {
tcp {
mode => "server"
host => "0.0.0.0"
port => 5044
codec => json_lines
}
}
filter{
}
output {
elasticsearch {
hosts => ["192.168.133.100:9200"]  # Elasticsearch address
index => "elk_logstash"            # index name
}
stdout { codec => rubydebug }
}

Create the container
docker run -it \
  --name logstash \
  --privileged \
  -p 5044:5044 \
  -p 9600:9600 \
  --network wn_docker_net \
  --ip 172.18.12.72 \
  -v /etc/localtime:/etc/localtime \
  -v /usr/local/software/elk/logstash/config:/usr/share/logstash/config \
  -v /usr/local/software/elk/logstash/pipeline:/usr/share/logstash/pipeline \
  -v /usr/local/software/elk/logstash/data:/usr/share/logstash/data \
  -d logstash:7.17.7
Check the container log (docker logs -f logstash) to confirm startup; if there are no errors, it is running.
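Once the container is up, you can also exercise the tcp input end to end. The sketch below (not from the original article) frames an event the way the json_lines codec expects, one JSON document per newline-terminated line; the throwaway local server only stands in for Logstash so the snippet runs anywhere. Point send_json_line at 192.168.133.100:5044 to hit the real container.

```python
import json
import socket
import socketserver
import threading

def send_json_line(host: str, port: int, event: dict) -> None:
    # json_lines codec: one JSON document per newline-terminated line
    with socket.create_connection((host, port)) as sock:
        sock.sendall((json.dumps(event) + "\n").encode("utf-8"))

# Stand-in for the Logstash tcp input, so the sketch runs without the container.
received = []

class _CaptureHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:
            received.append(json.loads(line))

server = socketserver.TCPServer(("127.0.0.1", 0), _CaptureHandler)
thread = threading.Thread(target=server.handle_request)
thread.start()

# Against the real deployment this would be:
# send_json_line("192.168.133.100", 5044, {"message": "hello", "appname": "App"})
send_json_line("127.0.0.1", server.server_address[1],
               {"message": "hello from the sketch", "appname": "App"})
thread.join(timeout=5)
server.server_close()
print(received)
```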
Integrating Spring Boot with Logstash
Add the dependency:
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>7.3</version>
</dependency>

Configure the spring-logback.xml file
<?xml version="1.0" encoding="UTF-8"?>
<!-- Log levels from low to high: TRACE < DEBUG < INFO < WARN < ERROR < FATAL; if set to WARN, anything below WARN is not output -->
<!-- scan: when true, the configuration file is reloaded if it changes; defaults to true -->
<!-- scanPeriod: interval at which to check the configuration file for changes; without a time unit the default is milliseconds.
     Only effective when scan is true. The default interval is 1 minute. -->
<!-- debug: when true, logback prints its internal status messages so you can watch it run; defaults to false -->
<configuration scan="true" scanPeriod="10 seconds">
<!-- 1. Output to the console -->
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<!-- This appender is for development: only the minimum level is configured; the console shows messages at or above that level -->
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} -%5level ---[%15.15thread] %-40.40logger{39} : %msg%n</pattern>
<!-- Set the character set -->
<charset>UTF-8</charset>
</encoder>
</appender>
<!-- 2. Output to a file -->
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- Log file output format -->
<!-- Note: a RollingFileAppender also needs a target file and a rolling policy to start; the path and pattern below are example values, not from the original article -->
<file>logs/app.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
<maxHistory>30</maxHistory>
</rollingPolicy>
<append>true</append>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} -%5level ---[%15.15thread] %-40.40logger{39} : %msg%n</pattern>
<charset>UTF-8</charset> <!-- character set set here -->
</encoder>
</appender>
<!--LOGSTASH config -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>192.168.133.100:5044</destination>
<encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
<!-- Custom timestamp format; the default is yyyy-MM-dd'T'HH:mm:ss.SSS -->
<timestampPattern>yyyy-MM-dd HH:mm:ss</timestampPattern>
<customFields>{"appname":"App"}</customFields>
</encoder>
</appender>
<root level="DEBUG">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="FILE"/>
<appender-ref ref="LOGSTASH"/>
</root>
</configuration>

The main things to adjust are the IP address and port in the <destination> element.
Writing logs to Logstash
Just use Lombok's @Slf4j annotation and log whatever you want to ship:
package com.wnhz.smart.es.controller;
import com.wnhz.smart.common.http.ResponseResult;
import com.wnhz.smart.es.doc.BookTabDoc;
import com.wnhz.smart.es.service.IBookTabDocService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
/**
* @author Hao
* @date 2023-12-06 10:40
*/
@RestController
@RequestMapping("/api/query")
@Slf4j
public class QueryController {
@Autowired
private IBookTabDocService iBookTabDocService;
@GetMapping("/test")
public ResponseResult<List<BookTabDoc>> test() {
List<BookTabDoc> allBooks = iBookTabDocService.getAllBooks();
log.debug("All data queried from es: {}", allBooks.subList(0, Math.min(1000, allBooks.size())));
return ResponseResult.ok(allBooks.subList(0, Math.min(3, allBooks.size())));
}
}

With this, all the log data is shipped to Logstash automatically.

Configure Kibana

Open http://192.168.133.100:5601/app/dev_tools#/console to create the index. Then open http://192.168.133.100:5601/app/management (on the older UI, click Stack Management), click Index Patterns to create an index pattern, and enter the name that follows index in logstash.conf, here elk_logstash.

Query method: search on the contents of the message field.
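As a sketch of what Kibana does under the hood, the same search can be issued directly against Elasticsearch's _search API. The index name and host are taken from the configuration above; the request is only built here, and the commented-out call would send it against a live cluster.

```python
import json
import urllib.request

# Match query against the message field of the elk_logstash index.
query = {"query": {"match": {"message": "queried from es"}}, "size": 10}

req = urllib.request.Request(
    "http://192.168.133.100:9200/elk_logstash/_search",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With a live cluster:
# with urllib.request.urlopen(req) as resp:
#     hits = json.load(resp)["hits"]["hits"]
print(req.full_url)
```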
Fixing the query error for large logs

When there are too many (or too large) log entries, the following error appears:
The length [1417761] of field [message] in doc[20]/index[elk_logstash] exceeds the [index.highlight.max_analyzed_offset] limit [1000000]. To avoid this error, set the query parameter [max_analyzed_offset] to a value less than index setting [1000000] and this will tolerate long field values by truncating them.
Solution

Use any tool that can send an HTTP PUT request with a body to the Elasticsearch instance on the target host. Note that it must be a PUT request; the URL and body are:

http://localhost:9200/_all/_settings?preserve_existing=true
{
    "index.highlight.max_analyzed_offset" : "999999999"
}
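Any HTTP client works; as one sketch, here is the same PUT built with Python's urllib. The request is only constructed here; uncomment the urlopen call to send it against your own cluster.

```python
import json
import urllib.request

settings = {"index.highlight.max_analyzed_offset": "999999999"}

req = urllib.request.Request(
    "http://localhost:9200/_all/_settings?preserve_existing=true",
    data=json.dumps(settings).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# With a live cluster:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(req.get_method(), req.full_url)
```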
A response like the following means it succeeded:

{
    "acknowledged": true
}

That concludes this article on setting up and using Logstash with Docker; for more on the topic, search 腳本之家 for earlier articles, and thank you for your continued support of 腳本之家!