
Sample code for implementing visual monitoring of redis-shake

Updated: 2024-03-29 08:49:22   Author: 真雪
Redis visual monitoring collects metrics and status from Redis servers and presents them to users in a visual form. This article shows how to build visual monitoring around redis-shake, explained in detail with code examples, for readers who need it as a reference.

I. redis-shake v4

1. Image

######################### Dockerfile ########################################
FROM centos:7
 
WORKDIR /opt
COPY shake.toml /tmp/
COPY redis-shake /opt/
COPY entrypoint.sh /usr/local/bin/
RUN  chmod +x redis-shake  &&  chmod +x /usr/local/bin/entrypoint.sh
EXPOSE 8888
ENTRYPOINT ["entrypoint.sh"]
 
######################### entrypoint.sh ######################################
#!/bin/bash
set -e
 
eval "cat <<EOF
 $(< /tmp/shake.toml)
EOF
"  > /opt/shake.toml
/opt/redis-shake /opt/shake.toml
exit 0
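
A minimal sketch of building and running the image (the environment variable names match the ${...} placeholders in the shake.toml template shown in the next section; since the placeholders are pasted into the TOML verbatim, string values must carry their own quotes):

# build the image (the tag is arbitrary)
docker build -t redis-shake:v4 .
 
# run it, mapping the status port and passing the values that
# entrypoint.sh substitutes into /opt/shake.toml
docker run -d --name redis-shake -p 8888:8888 \
  -e SOURCE_IF_CLUSTER=true \
  -e SOURCE_ADDRESS='"10.127.11.11:9984"' \
  -e SOURCE_PASSWORD='""' \
  -e SYNC_RDB=true \
  -e SYNC_AOF=true \
  -e TARGET_IF_CLUSTER=true \
  -e TARGET_ADDRESS='"10.127.12.11:9984"' \
  -e TARGET_PASSWORD='""' \
  -e RESTORE_BEHAVIOR='"rewrite"' \
  redis-shake:v4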

2. shake.toml

status_port = 8888 is the port that serves the monitoring data; map port 8888 when deploying and starting the container.

function = ""
 
########## filter keys by prefix ##########################################
# function = """
#local prefix = "user:"
#local prefix_len = #prefix
#if string.sub(KEYS[1], 1, prefix_len) ~= prefix then
#  return
#end
#shake.call(DB, ARGV)
#"""
 
[sync_reader]
cluster = ${SOURCE_IF_CLUSTER}  # set to true if source is a redis cluster
address = ${SOURCE_ADDRESS}     # when cluster is true, set address to one of the cluster node
password = ${SOURCE_PASSWORD}   # keep empty if no authentication is required
sync_rdb = ${SYNC_RDB} # true: do the full sync from the RDB snapshot; false: skip it
sync_aof = ${SYNC_AOF} # true: do the incremental sync from the AOF stream; false: skip it
prefer_replica = true # set to true if you want to sync from replica node
dbs = []           # list the DBs to scan, e.g. [1,5,7]; leave empty to scan all
tls = false
# username = ""              # keep empty if not using ACL
# ksn = false         # set to true to enable Redis keyspace notifications (KSN) subscription
 
[redis_writer]
cluster = ${TARGET_IF_CLUSTER}   # set to true if target is a redis cluster
address = ${TARGET_ADDRESS}      # when cluster is true, set address to one of the cluster node
password = ${TARGET_PASSWORD}    # keep empty if no authentication is required
tls = false
off_reply = false       # turn off the server reply
# username = ""         # keep empty if not using ACL
 
[advanced]
dir = "data"
ncpu = 0        # runtime.GOMAXPROCS, 0 means use runtime.NumCPU() cpu cores
# pprof_port = 8856  # pprof port, 0 means disable
status_port = 8888 # status port, 0 means disable
 
# log
log_file = "shake.log"
log_level = "info"     # debug, info or warn
log_interval = 5       # in seconds
 
# redis-shake gets each key and value from the rdb file and uses the RESTORE
# command to create the key in the target redis. Redis RESTORE returns a
# "Target key name is busy" error when the key already exists. You can use this
# configuration item to change the default behavior of restore:
# panic:   redis-shake stops when it meets a "Target key name is busy" error.
# rewrite: redis-shake replaces the key with the new value.
# ignore:  redis-shake skips restoring the key when it meets a "Target key name is busy" error.
rdb_restore_command_behavior = ${RESTORE_BEHAVIOR} # panic, rewrite or ignore
 
# redis-shake uses pipeline to improve sending performance.
# This item limits the maximum number of commands in a pipeline.
pipeline_count_limit = 1024
 
# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default. This amount is normally 1gb.
target_redis_client_max_querybuf_len = 1024_000_000
 
# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 MB.
target_redis_proto_max_bulk_len = 512_000_000
 
# If the source is Elasticache or MemoryDB, you can set this item.
aws_psync = "" # example: aws_psync = "10.0.0.1:6379@nmfu2sl5osync,10.0.0.1:6379@xhma21xfkssync"
 
# The destination will delete its entire database before fetching files
# from the source during full synchronization.
# This option is similar to the redis replica RDB diskless load option:
#   repl-diskless-load on-empty-db
empty_db_before_sync = false
 
[module]
# The data format for BF.LOADCHUNK is not compatible in different versions. v2.6.3 <=> 20603
target_mbbloom_version = 20603

3. After starting redis-shake

Multiple redis-shake instances can be deployed, for example at 10.111.11.12:8888, 10.111.11.12:8889 and 10.111.11.12:8890.
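
The status JSON shown below can be fetched from status_port over plain HTTP; a minimal sketch, assuming the JSON is served at the root path (jq is only used for pretty-printing):

curl -s http://10.111.11.12:8888 | jq .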

{"start_time":"2024-02-02 16:13:07","consistent":true,"total_entries_count":{"read_count":77403368,"read_ops":0,"write_count":77403368,"write_ops":0},"per_cmd_entries_count":{"APPEND":{"read_count":2,"read_ops":0,"write_count":2,"write_ops":0},"DEL":{"read_count":5,"read_ops":0,"write_count":5,"write_ops":0},"HMSET":{"read_count":2,"read_ops":0,"write_count":2,"write_ops":0},"PEXPIRE":{"read_count":8,"read_ops":0,"write_count":8,"write_ops":0},"RESTORE":{"read_count":77403341,"read_ops":0,"write_count":77403341,"write_ops":0},"SADD":{"read_count":1,"read_ops":0,"write_count":1,"write_ops":0},"SCRIPT-LOAD":{"read_count":7,"read_ops":0,"write_count":7,"write_ops":0},"SET":{"read_count":2,"read_ops":0,"write_count":2,"write_ops":0}},"reader":[{"name":"reader_10.127.11.11_9984","address":"10.127.11.11:9984","dir":"/opt/data/reader_10.172.48.17_9984","status":"syncing aof","rdb_file_size_bytes":867659640,"rdb_file_size_human":"828 MiB","rdb_received_bytes":867659640,"rdb_received_human":"828 MiB","rdb_sent_bytes":867659640,"rdb_sent_human":"828 MiB","aof_received_offset":567794044,"aof_sent_offset":567794044,"aof_received_bytes":6614445,"aof_received_human":"6.3 MiB"},{"name":"reader_10.127.11.12_9984","address":"10.127.11.12:9984","dir":"/opt/data/reader_10.172.48.16_9984","status":"syncing aof","rdb_file_size_bytes":867824091,"rdb_file_size_human":"828 MiB","rdb_received_bytes":867824091,"rdb_received_human":"828 MiB","rdb_sent_bytes":867824091,"rdb_sent_human":"828 MiB","aof_received_offset":564917306,"aof_sent_offset":564917306,"aof_received_bytes":6612502,"aof_received_human":"6.3 MiB"},{"name":"reader_10.127.11.13_9984","address":"10.127.11.13:9984","dir":"/opt/data/reader_10.172.48.15_9984","status":"syncing aof","rdb_file_size_bytes":867661773,"rdb_file_size_human":"828 MiB","rdb_received_bytes":867661773,"rdb_received_human":"828 MiB","rdb_sent_bytes":867661773,"rdb_sent_human":"828 MiB","aof_received_offset":562834707,"aof_sent_offset":562834707,"aof_received_bytes":6615286,"aof_received_human":"6.3 MiB"}],"writer":[{"name":"writer_10.127.12.11_9984","unanswered_bytes":0,"unanswered_entries":0},{"name":"writer_10.127.12.12_9984","unanswered_bytes":0,"unanswered_entries":0},{"name":"writer_10.127.12.13_9984","unanswered_bytes":0,"unanswered_entries":0}]}

II. json-exporter configuration

1. Dockerfile

FROM prometheuscommunity/json-exporter:latest
 
USER root
RUN mkdir -p  /opt
WORKDIR /opt
COPY  config.yml /opt/
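
Depending on the image's default arguments, json-exporter may not look for its configuration under /opt, so it is safer to point it at the copied file explicitly. A minimal sketch to append to the Dockerfile above, using json_exporter's --config.file flag:

# make the config path explicit
CMD ["--config.file", "/opt/config.yml"]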

2. config.yml

Based on the JSON returned above, define the monitoring template you need; here json-exporter is deployed at 10.111.11.11:7979.

modules:
  default:
    headers:
      X-Dummy: my-test-header
    metrics:
    - name: shake_consistent
      help: Example of sub-level value scrapes from a json
      path: '{.consistent}'
      labels:
        start_time: '{.start_time}'
    - name: shake_total_entries_count
      type: object
      help: Example of sub-level value scrapes from a json
      path: '{.total_entries_count}'
      values:
        read_count: '{.read_count}'
        read_ops: '{.read_ops}'
        write_count: '{.write_count}'
        write_ops: '{.write_ops}'
    - name: shake_per_cmd_entries_count_restore
      type: object
      help: Example of sub-level value scrapes from a json
      path: "{.per_cmd_entries_count.RESTORE}"
      values:
        read_count: '{.read_count}'
        read_ops: '{.read_ops}'
        write_count: '{.write_count}'
        write_ops: '{.write_ops}'
    - name: shake_per_cmd_entries_script_load
      type: object
      help: Example of sub-level value scrapes from a json
      path: "{.per_cmd_entries_count.SCRIPT-LOAD}"
      values:
        read_count: '{.read_count}'
        read_ops: '{.read_ops}'
        write_count: '{.write_count}'
        write_ops: '{.write_ops}'
    - name: shake_reader
      type: object
      help: Example of sub-level value scrapes from a json
      path: "{.reader}"
      labels:
        address: '{.address}'          # dynamic label
        dir: '{.dir}'
        status: '{.status}'
      values:
        rdb_file_size_bytes: '{.rdb_file_size_bytes}'
        rdb_received_bytes: '{.rdb_received_bytes}'
        rdb_sent_bytes: '{.rdb_sent_bytes}'
        aof_received_offset: '{.aof_received_offset}'
        aof_sent_offset: '{.aof_sent_offset}'
        aof_received_bytes: '{.aof_received_bytes}'
    - name: shake_writer
      type: object
      help: Example of sub-level value scrapes from a json
      path: "{.writer}"
      labels:
        name: '{.name}'          # dynamic label
      values:
        unanswered_bytes: '{.unanswered_bytes}'
        unanswered_entries: '{.unanswered_entries}'
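
Before wiring the exporter into Prometheus, it can be tested directly against one of the redis-shake status endpoints; a minimal sketch, assuming the default module defined above:

curl -s "http://10.111.11.11:7979/probe?module=default&target=http://10.111.11.12:8888"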

III. Prometheus configuration

1. prometheus.yml

global:
  scrape_interval: 15s 
  evaluation_interval: 15s
 
scrape_configs:
  - job_name: json_exporter
    metrics_path: /probe
    file_sd_configs:
    - files:
      - 'redis-shake.json'
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 10.111.11.11:7979 # json-exporter address

2. redis-shake.json

Keeping the targets in a separate file-based service-discovery file allows them to be reloaded dynamically, and custom labels can also be added in the file. Here labels holds the custom labels and targets lists the addresses of the deployed redis-shake instances; note that Prometheus label names may not contain hyphens, and JSON does not allow comments.

[
  {"labels": {"env_1": "team-1"}, "targets": ["http://10.111.11.12:8888"]},
  {"labels": {"env_1": "team-2"}, "targets": ["http://10.111.11.12:8889"]},
  {"labels": {"env_1": "team-3"}, "targets": ["http://10.111.11.12:8890"]}
]
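
Prometheus re-reads file_sd files when they change, so new redis-shake instances can be added to redis-shake.json without restarting Prometheus. A quick sanity check of the discovered targets, assuming Prometheus listens on its default port 9090:

curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[].labels'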

IV. Grafana

Once everything above is configured, add the Prometheus data source to Grafana and you can build whatever monitoring dashboards you need.
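
A few example panel queries, as a sketch; the metric names assume json-exporter's usual naming, in which each value key is appended to the metric name defined in config.yml, and env_1 is the custom label from redis-shake.json:

# 1 when the instance reports consistent, 0 otherwise
shake_consistent{env_1="team-1"}
 
# RESTORE write throughput during full sync, per instance
rate(shake_per_cmd_entries_count_restore_write_count[5m])
 
# AOF offset lag per source node (received minus sent)
shake_reader_aof_received_offset - shake_reader_aof_sent_offset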

This concludes the article on sample code for implementing visual monitoring of redis-shake. For more on redis-shake visual monitoring, search 腳本之家 for earlier articles, and thank you for supporting 腳本之家!
