
Tutorial: Deploying a TIG Monitoring Stack with Docker

 Updated: 2024-12-19 11:10:36   Author: Mars'Ares
This article describes how to monitor services with the TIG (Telegraf-InfluxDB-Grafana) stack, assembled quickly with Docker Compose. It covers installing and configuring Telegraf, InfluxDB, and Grafana: Telegraf collects the data, InfluxDB stores it, and Grafana visualizes it, working through the configuration files with pointers to the official documentation.

Preface

TIG stands for the three services InfluxDB, Grafana, and Telegraf.

This architecture is leaner than the traditional Prometheus stack. Although the open-source edition of InfluxDB has no clustered deployment, the approach is simple and efficient for small and medium-sized monitoring needs.

This article uses docker-compose to demonstrate how quickly this monitoring stack can be set up and what the result looks like.

Deployment

docker-compose.yaml

version: '3'
networks:
  monitor:
    driver: bridge
#application services
services:
  #PrometheusAlert: forwards Grafana alerts (to Feishu here)
  #username/password: prometheusalert prometheusalert
  prometheusalert:
    image: feiyu563/prometheus-alert
    container_name: prometheusalert
    hostname: prometheusalert
    restart: always
    ports:
      - 8087:8080
    networks:
      - monitor
    volumes:
      - ./docker/prometheusalert/conf:/app/conf
      - ./docker/prometheusalert/db:/app/db
    environment:
      - PA_LOGIN_USER=prometheusalert
      - PA_LOGIN_PASSWORD=prometheusalert
      - PA_TITLE=PrometheusAlert
      - PA_OPEN_FEISHU=1
  #dashboard UI; default username/password: admin admin
  grafana:
    image: grafana/grafana
    container_name: grafana
    hostname: grafana
    restart: always
    volumes:
      - ./docker/grafana/data/grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    networks:
      - monitor
     
  #InfluxDB v2 database (v2 ships with its own admin UI)
  #username/password: root root
  influxdb:
    image: influxdb
    container_name: influxdb
    environment:
      INFLUX_DB: test                   # may have no effect (the v2 image uses DOCKER_INFLUXDB_INIT_* variables)
      INFLUXDB_USER: root               # may have no effect
      INFLUXDB_USER_PASSWORD: root      # may have no effect
    ports:
      - "8086:8086"
    restart: always
    volumes:
      - ./docker/influxdb/:/var/lib/influxdb
    networks:
      - monitor
  #InfluxDB v1 database (alternative, commented out)
  #influxdb1x: 
  #  image: influxdb:1.8
  #  container_name: influxdb1.8
  #  environment:
  #    INFLUXDB_DB: test
  #    INFLUXDB_ADMIN_ENABLED: true
  #    INFLUXDB_ADMIN_USER: root
  #    INFLUXDB_ADMIN_PASSWORD: root
  #  ports:
  #    - "8098:8086"
  #  restart: always
  #  volumes:
  #    - ./docker/influxdb1x/influxdb1x.conf:/etc/influxdb/influxdb.conf
  #    - ./docker/influxdb1x/:/var/lib/influxdb
  #  networks:
  #    - monitor
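
With docker-compose.yaml saved, the server-side containers can be brought up and checked in a couple of commands (on older installations the hyphenated docker-compose binary replaces docker compose):

# start the stack in the background
docker compose up -d
# confirm all containers are running
docker compose ps
# follow the logs if anything fails to start
docker compose logs -f influxdb grafana prometheusalert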

Telegraf installation (see the official docs)

# Telegraf is the collection agent and runs at the source of the monitoring data.
# See the official docs for details; the example below targets a yum-based Linux server.
# add the package repository
cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo
[influxdb]
name = InfluxData Repository - Stable
baseurl = https://repos.influxdata.com/stable/\$basearch/main
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key
EOF

# install
sudo yum install telegraf

# verify
telegraf --help
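
The repository above targets yum-based distributions. For Debian/Ubuntu hosts, the equivalent setup is sketched below; this is an assumption based on the same repository layout, so verify the key-handling steps against the current official docs.

# Debian/Ubuntu equivalent (sketch; confirm against the official docs)
curl -fsSL https://repos.influxdata.com/influxdata-archive_compat.key \
  | sudo gpg --dearmor -o /usr/share/keyrings/influxdata-archive.gpg
echo "deb [signed-by=/usr/share/keyrings/influxdata-archive.gpg] https://repos.influxdata.com/debian stable main" \
  | sudo tee /etc/apt/sources.list.d/influxdata.list
sudo apt-get update && sudo apt-get install telegraf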

Usage

1. Log in to InfluxDB at http://localhost:8086.

  • the first login walks you through creating an account
  • an org is a logical partition (organization)
  • a bucket is roughly the equivalent of a database (a CLI alternative to this whole step is sketched below)
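
If you prefer to script this step instead of clicking through the web UI, the initial user, org, and bucket can be created with the influx CLI inside the container. A minimal sketch, assuming the container name influxdb from the compose file and the org/bucket names (test/test) used later in this article; note that InfluxDB 2.x rejects passwords shorter than eight characters, so the four-character root will not work here.

# one-time initialization, run inside the influxdb container
docker exec influxdb influx setup \
  --username root \
  --password rootroot123 \
  --org test \
  --bucket test \
  --force          # skip the interactive confirmation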

2. Configure the Telegraf collection (ETL) pipeline

See the official Telegraf documentation for an overview of the available input plugins.

  • create an InfluxDB token for write access
  • write the Telegraf configuration file
  • start Telegraf on the collection host

a. Create a token (a CLI alternative is sketched below).

b. You can either generate the configuration file from the InfluxDB UI or maintain it yourself.

A UI-generated configuration is served over an HTTP endpoint, so remote agents can download it directly.
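
The token can likewise be created from the CLI rather than the UI. A sketch assuming the org test created above; the token this prints is what goes into the influxdb_v2 output section of telegraf.conf below.

# create a token that can read and write all buckets in the org
docker exec influxdb influx auth create \
  --org test \
  --read-buckets \
  --write-buckets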

The following configuration takes an nginx access log as the example input.

telegraf.conf

# Configuration for telegraf agent 
# the [agent] section below keeps Telegraf's default settings
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## Maximum number of unwritten metrics per output.  Increasing this value
  ## allows for longer periods of output downtime without dropping metrics at the
  ## cost of higher maximum memory usage.
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "μs"), "ms", "s".
  precision = ""

  ## Log at debug level.
  # debug = false
  ## Log only error level messages.
  # quiet = false

  ## Log target controls the destination for logs and can be one of "file",
  ## "stderr" or, on Windows, "eventlog".  When set to "file", the output file
  ## is determined by the "logfile" setting.
  # logtarget = "file"

  ## Name of the file to be logged to when using the "file" logtarget.  If set to
  ## the empty string then logs are written to stderr.
  # logfile = ""

  ## The logfile will be rotated after the time interval specified.  When set
  ## to 0 no time based rotation is performed.  Logs are rotated only when
  ## written to, if there is no log activity rotation may be delayed.
  # logfile_rotation_interval = "0d"

  ## The logfile will be rotated when it becomes larger than the specified
  ## size.  When set to 0 no size based rotation is performed.
  # logfile_rotation_max_size = "0MB"

  ## Maximum number of rotated archives to keep, any older logs are deleted.
  ## If set to -1, no archives are removed.
  # logfile_rotation_max_archives = 5

  ## Pick a timezone to use when logging or type 'local' for local time.
  ## Example: America/Chicago
  # log_with_timezone = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do no set the "host" tag in the telegraf agent.
  omit_hostname = false

# influxdb_v2 output plugin: this is the part that must be configured:
# server address: urls, partition: organization, database: bucket, auth token: token
[[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ##   ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
  urls = ["http://localhost:8086"] 

  ## Token for authentication.
  token = "上一步創(chuàng)建的token"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "創(chuàng)建的分區(qū) 這里是test"

  ## Destination bucket to write into.
  bucket = "創(chuàng)建的表 這里是test"

  ## The value of this tag will be used to determine the bucket.  If this
  ## tag is not set the 'bucket' option is used as the default.
  # bucket_tag = ""

  ## If true, the bucket tag will not be added to the metric.
  # exclude_bucket_tag = false

  ## Timeout for HTTP messages.
  # timeout = "5s"

  ## Additional HTTP headers
  # http_headers = {"X-Special-Header" = "Special-Value"}

  ## HTTP Proxy override, if unset values the standard proxy environment
  ## variables are consulted to determine which proxy, if any, should be used.
  # http_proxy = "http://corporate.proxy:3128"

  ## HTTP User-Agent
  # user_agent = "telegraf"

  ## Content-Encoding for write request body, can be set to "gzip" to
  ## compress body or "identity" to apply no encoding.
  # content_encoding = "gzip"

  ## Enable or disable uint support for writing uints influxdb 2.0.
  # influx_uint_support = false

  ## Optional TLS Config for use on HTTP connections.
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

# Parse the new lines appended to a file
# tail input plugin; this example tails the nginx access log. Configure:
# files: the path(s) of the file(s) to tail
# grok_patterns: the grok expression that extracts the monitored fields from each nginx log line
# name_override: the measurement name under which the nginx data is stored
[[inputs.tail]]
  ## File names or a pattern to tail.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   "/var/log/**.log"  -> recursively find all .log files in /var/log
  ##   "/var/log/*/*.log" -> find all .log files with a parent dir in /var/log
  ##   "/var/log/apache.log" -> just tail the apache log file
  ##   "/var/log/log[!1-2]*  -> tail files without 1-2
  ##   "/var/log/log[^1-2]*  -> identical behavior as above
  ## See https://github.com/gobwas/glob for more examples
  ##
  files = ["/logs/nginx/access_main.log"]

  ## Read file from beginning.
  #from_beginning = false

  ## Whether file is a named pipe
  # pipe = false

  ## Method used to watch for file updates.  Can be either "inotify" or "poll".
  # watch_method = "inotify"

  ## Maximum lines of the file to process that have not yet be written by the
  ## output.  For best throughput set based on the number of metrics on each
  ## line and the size of the output's metric_batch_size.
  # max_undelivered_lines = 1000

  ## Character encoding to use when interpreting the file contents.  Invalid
  ## characters are replaced using the unicode replacement character.  When set
  ## to the empty string the data is not decoded to text.
  ##   ex: character_encoding = "utf-8"
  ##       character_encoding = "utf-16le"
  ##       character_encoding = "utf-16be"
  ##       character_encoding = ""
  # character_encoding = ""

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  grok_patterns = ["%{NGINX_ACCESS_LOG}"]
  name_override = "nginx_access_log"
  #grok_custom_pattern_files = []
  #grok_custom_patterns = '''
  #NGINX_ACCESS_LOG %{IP:remote_addr} - (-|%{WORD:remote_user}) [%{HTTPDATE:time_local}] %{BASE10NUM:request_time:float} (-|%{BASE10NUM:upstream_response_time:float}) %{IPORHOST:host} %{QS:request} %{NUMBER:status:int} %{NUMBER:body_bytes_sent:int} %{QS:referrer} %{QS:agent} %{IPORHOST:xforwardedfor}
  #'''
  grok_custom_patterns = '''
  NGINX_ACCESS_LOG %{IP:remote_addr} - (-|%{WORD:remote_user:drop}) \[%{HTTPDATE:ts:ts}\] %{BASE10NUM:request_time:float} %{BASE10NUM:upstream_response_time:float} %{IPORHOST:host:tag} "(?:%{WORD:verb:drop} %{NOTSPACE:request:tag}(?: HTTP/%{NUMBER:http_version:drop})?|%{DATA:rawrequest})" %{NUMBER:status:tag} (?:%{NUMBER:resp_bytes}|-)  %{QS:referrer:drop} %{QS:agent:drop} %{QS:xforwardedfor:drop}
  '''

  grok_timezone = "Local"
  data_format = "grok"

  ## Set the tag that will contain the path of the tailed file. If you don't want this tag, set it to an empty string.
  # path_tag = "path"

  ## multiline parser/codec
  ## https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-multiline.html
  #[inputs.tail.multiline]
    ## The pattern should be a regexp which matches what you believe to be an
	## indicator that the field is part of an event consisting of multiple lines of log data.
    #pattern = "^\s"

    ## This field must be either "previous" or "next".
	## If a line matches the pattern, "previous" indicates that it belongs to the previous line,
	## whereas "next" indicates that the line belongs to the next one.
    #match_which_line = "previous"

    ## The invert_match field can be true or false (defaults to false).
    ## If true, a message not matching the pattern will constitute a match of the multiline
	## filter and the what will be applied. (vice-versa is also true)
    #invert_match = false

    ## After the specified timeout, this plugin sends a multiline event even if no new pattern
	## is found to start a new event. The default timeout is 5s.
    #timeout = 5s
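
Before starting the agent for real, the configuration can be sanity-checked. Telegraf's --test flag runs the input plugins once and prints the resulting metrics to stdout instead of writing them to InfluxDB, which is a quick way to confirm the grok pattern actually matches your log lines:

# run inputs once and print line-protocol output without writing to InfluxDB
telegraf --config telegraf.conf --test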

Grok pattern example

# nginx log format
'$remote_addr - $remote_user [$time_local] $request_time $upstream_response_time $host "$request" '
                      '$status $body_bytes_sent  "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'
# grok parsing pattern; the modifiers after a field name control how telegraf stores it:
# :float/:int set the field type, :tag stores the value as a tag, :drop discards it, :ts marks the timestamp
  NGINX_ACCESS_LOG %{IP:remote_addr} - (-|%{WORD:remote_user:drop}) \[%{HTTPDATE:ts:ts}\] %{BASE10NUM:request_time:float} %{BASE10NUM:upstream_response_time:float} %{IPORHOST:host:tag} "(?:%{WORD:verb:drop} %{NOTSPACE:request:tag}(?: HTTP/%{NUMBER:http_version:drop})?|%{DATA:rawrequest})" %{NUMBER:status:tag} (?:%{NUMBER:resp_bytes}|-)  %{QS:referrer:drop} %{QS:agent:drop} %{QS:xforwardedfor:drop}

# sample nginx log line
1.1.1.2 - - [30/Jan/2023:02:27:24 +0000] 0.075 0.075 xxx.xxx.xxx "POST /api/xxx/xxx/xxx HTTP/1.1" 200 69  "https://xxx.xxx.xxx/" "Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148" "1.1.1.1"

# grok extracts the variables below; telegraf then assembles the selected ones into the InfluxDB line protocol
{
  "NGINX_ACCESS_LOG": [
    [
      "1.1.1.2 - - [30/Jan/2023:02:27:24 +0000] 0.075 0.075 prod.webcomicsapp.com "POST /api/xxx/xxx/xxx HTTP/1.1" 200 69  "https://xxx.xxx.xxx/" "Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148" "1.46.138.190""
    ]
  ],
  "remote_addr": [
    [
      "1.1.1.2"
    ]
  ],
  "IPV6": [
    [
      null,
      null
    ]
  ],
  "IPV4": [
    [
      "1.1.1.2",
      null
    ]
  ],
  "remote_user": [
    [
      null
    ]
  ],
  "ts": [
    [
      "30/Jan/2023:02:27:24 +0000"
    ]
  ],
  "MONTHDAY": [
    [
      "30"
    ]
  ],
  "MONTH": [
    [
      "Jan"
    ]
  ],
  "YEAR": [
    [
      "2023"
    ]
  ],
  "TIME": [
    [
      "02:27:24"
    ]
  ],
  "HOUR": [
    [
      "02"
    ]
  ],
  "MINUTE": [
    [
      "27"
    ]
  ],
  "SECOND": [
    [
      "24"
    ]
  ],
  "INT": [
    [
      "+0000"
    ]
  ],
  "request_time": [
    [
      "0.075"
    ]
  ],
  "upstream_response_time": [
    [
      "0.075"
    ]
  ],
  "host": [
    [
      "xxx.xxx.xxx"
    ]
  ],
  "HOSTNAME": [
    [
      "xxx.xxx.xxx"
    ]
  ],
  "IP": [
    [
      null
    ]
  ],
  "verb": [
    [
      "POST"
    ]
  ],
  "request": [
    [
      "/api/xxx/xxx/xxx"
    ]
  ],
  "http_version": [
    [
      "1.1"
    ]
  ],
  "BASE10NUM": [
    [
      "1.1",
      "200",
      "69"
    ]
  ],
  "rawrequest": [
    [
      null
    ]
  ],
  "status": [
    [
      "200"
    ]
  ],
  "resp_bytes": [
    [
      "69"
    ]
  ],
  "referrer": [
    [
      ""https://xxx.xxx.xxx/""
    ]
  ],
  "QUOTEDSTRING": [
    [
      ""https://xxx.xxx.xxx/"",
      ""Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148"",
      ""1.1.1.1""
    ]
  ],
  "agent": [
    [
      ""Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148""
    ]
  ],
  "xforwardedfor": [
    [
      ""1.1.1.1""
    ]
  ]
}

Start Telegraf

# test run in the foreground with debug output
telegraf --config telegraf.conf --debug
# stop the test run
Ctrl+C
# run as a background process
nohup telegraf --config telegraf.conf >/dev/null 2>&1 &
# stop the background process
ps aux | grep telegraf
kill '<pid>'   # SIGTERM lets telegraf flush its buffers; reserve kill -9 for a hung process
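
On hosts where Telegraf was installed from the package repository above, letting systemd manage the process is usually cleaner than nohup; the package ships a telegraf service unit that reads /etc/telegraf/telegraf.conf:

# copy the config into the packaged location and manage the agent via systemd
sudo cp telegraf.conf /etc/telegraf/telegraf.conf
sudo systemctl enable --now telegraf    # start now and on every boot
sudo systemctl status telegraf          # confirm it is running
journalctl -u telegraf -f               # follow its logs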

3. Configure Grafana

a. Log in to Grafana at http://localhost:3000/login with admin / admin.

The first login forces a password change.

b. Add the data source (this step can also be provisioned from a file, as sketched below).
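
Instead of adding the data source by hand, Grafana can provision it from a file. A hedged sketch, assuming the compose network above (so Grafana reaches InfluxDB at http://influxdb:8086), the org/bucket test, and Flux as the query language; the file lives under a directory mounted at /etc/grafana/provisioning/datasources inside the grafana container, so the compose file would also need that volume added.

# docker/grafana/provisioning/datasources/influxdb.yaml (sketch)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    jsonData:
      version: Flux            # query with Flux rather than InfluxQL
      organization: test
      defaultBucket: test
    secureJsonData:
      token: <token created in step a>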

With that, the whole TIG data flow is in place.

Summary

The above reflects my personal experience; I hope it gives you a useful reference, and I hope you will continue to support 腳本之家.
