
Tutorial: Quickly Deploying a Nebula Graph Cluster with Docker Swarm

Updated: 2020-09-27 08:52:28   Author: 二十四橋明月夜33
This article explains in detail how to quickly deploy a Nebula Graph cluster with Docker Swarm. It should be a useful reference for anyone studying or working with Nebula Graph.

1. Introduction

This article describes how to deploy a Nebula Graph cluster using Docker Swarm.

2. Setting up the Nebula cluster

2.1 Environment preparation

Machine preparation:

IP              Memory (GB)   CPU (cores)
192.168.1.166   16            4
192.168.1.167   16            4
192.168.1.168   16            4

Before installing, make sure Docker is already installed on all machines.
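
For example, you can quickly confirm this on each host as follows (a minimal sketch; how you actually install Docker depends on your distribution):

# Confirm the Docker engine is installed and the daemon is running
docker --version
docker info --format '{{.ServerVersion}}'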

2.2 Initialize the Swarm cluster

Run the following on 192.168.1.166:

$ docker swarm init --advertise-addr 192.168.1.166
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.
To add a worker to this swarm, run the following command:
 docker swarm join \
 --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
 192.168.1.166:2377
 
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
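
If you lose the token, you can have the manager print the worker join command again at any time:

# Re-print the join command (including the current token) for workers
docker swarm join-token worker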

2.3 Join the worker nodes

Following the instructions printed by the init command, join the swarm as workers by running the following on 192.168.1.167 and 192.168.1.168:

docker swarm join \
 --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
 192.168.1.166:2377

2.4 Verify the cluster

docker node ls

ID                          HOSTNAME      STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
h0az2wzqetpwhl9ybu76yxaen * KF2-DATA-166  Ready    Active         Reachable        18.06.1-ce
q6jripaolxsl7xqv3cmv5pxji   KF2-DATA-167  Ready    Active         Leader           18.06.1-ce
h1iql1uvm7123h3gon9so69dy   KF2-DATA-168  Ready    Active                          18.06.1-ce

2.5 Configure the docker stack

vi docker-stack.yml

Add the following content:

version: '3.6'
services:
  metad0:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad0:/data/meta
      - logs-metad0:/logs
    networks:
      - nebula-net

  metad1:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad1:/data/meta
      - logs-metad1:/logs
    networks:
      - nebula-net

  metad2:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad2:/data/meta
      - logs-metad2:/logs
    networks:
      - nebula-net

  storaged0:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12002
        protocol: tcp
        mode: host
    volumes:
      - data-storaged0:/data/storage
      - logs-storaged0:/logs
    networks:
      - nebula-net

  storaged1:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12004
        protocol: tcp
        mode: host
    volumes:
      - data-storaged1:/data/storage
      - logs-storaged1:/logs
    networks:
      - nebula-net

  storaged2:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12006
        protocol: tcp
        mode: host
    volumes:
      - data-storaged2:/data/storage
      - logs-storaged2:/logs
    networks:
      - nebula-net

  graphd1:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.166
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:13000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3699
        protocol: tcp
        mode: host
      - target: 13000
        published: 13000
        protocol: tcp
      #  mode: host
      - target: 13002
        published: 13002
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd:/logs
    networks:
      - nebula-net

  graphd2:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.167
      - --log_dir=/logs
      - --v=2
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:13001/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3640
        protocol: tcp
        mode: host
      - target: 13000
        published: 13001
        protocol: tcp
        mode: host
      - target: 13002
        published: 13003
        protocol: tcp
      #  mode: host
    volumes:
      - logs-graphd2:/logs
    networks:
      - nebula-net

  graphd3:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.168
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:13002/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3641
        protocol: tcp
        mode: host
      - target: 13000
        published: 13002
        protocol: tcp
      #  mode: host
      - target: 13002
        published: 13004
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd3:/logs
    networks:
      - nebula-net

networks:
  nebula-net:
    external: true
    attachable: true
    name: host

volumes:
  data-metad0:
  logs-metad0:
  data-metad1:
  logs-metad1:
  data-metad2:
  logs-metad2:
  data-storaged0:
  logs-storaged0:
  data-storaged1:
  logs-storaged1:
  data-storaged2:
  logs-storaged2:
  logs-graphd:
  logs-graphd2:
  logs-graphd3:

Edit nebula.env

Add the following content:

TZ=UTC
USER=root


2.6 Start the Nebula cluster

docker stack deploy nebula -c docker-stack.yml
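
After deploying, you can check that all services have started and see which node each task was scheduled on (a quick sanity check):

# List the services in the nebula stack and their replica counts
docker stack services nebula
# Show each task of the stack and the node it is running on
docker stack ps nebula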

3. Cluster load balancing and high availability

The Nebula Graph client (as of 1.x) does not provide load balancing; it simply picks a graphd at random to connect to. For production use you therefore need to set up load balancing and high availability yourself.

Figure 3.1: Deployment architecture

The overall deployment is split into three layers: the data service layer, the load balancing layer, and the high availability layer, as shown in Figure 3.1.

Load balancing layer: load-balances client requests and distributes them to the data service layer below.

High availability layer: here this means high availability for HAProxy itself, which keeps the load balancing layer, and therefore the whole cluster, serving normally.

3.1 Load balancing configuration

HAProxy is configured with docker-compose. Edit the following three files.

Add the following to the Dockerfile:

FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EXPOSE 3640


Add the following to docker-compose.yml:

version: "3.2"
services:
  haproxy:
    container_name: haproxy
    build: .
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    ports:
      - 3640:3640
    restart: always
    networks:
      - app_net
networks:
  app_net:
    external: true


Add the following to haproxy.cfg:

global
    daemon
    maxconn 30000
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning

defaults
    log-format %hr\ %ST\ %B\ %Ts
    log global
    mode http
    option http-keep-alive
    timeout connect 5000ms
    timeout client 10000ms
    timeout server 50000ms
    timeout http-request 20000ms

# custom your own frontends && backends && listen conf
# CUSTOM

listen graphd-cluster
    bind *:3640
    mode tcp
    maxconn 300
    balance roundrobin
    server server1 192.168.1.166:3699 maxconn 300 check
    server server2 192.168.1.167:3699 maxconn 300 check
    server server3 192.168.1.168:3699 maxconn 300 check

listen stats
    bind *:1080
    stats refresh 30s
    stats uri /stats

3.2 Start HAProxy

docker-compose up -d
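
To check that HAProxy came up, you can verify the container and the listening port on the local host (a minimal check; nc is assumed to be installed):

# The haproxy container should be running
docker ps --filter name=haproxy
# The load-balanced graphd port should accept connections
nc -vz 127.0.0.1 3640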

3.3 High availability configuration

Note: configuring keepalived requires a VIP (virtual IP) to be prepared in advance; in the configuration below, 192.168.1.99 is the virtual IP.

Apply the following configuration on each of 192.168.1.166, 192.168.1.167 and 192.168.1.168.

Install keepalived:

apt-get update && apt-get upgrade && apt-get install keepalived -y

Edit the keepalived configuration file /etc/keepalived/keepalived.conf (use the configuration below on all three machines; priority should be set to a different value on each to determine precedence).

Configuration for 192.168.1.166:

global_defs {
    router_id lb01    # identifier; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 52
    priority 999
    # Interval, in seconds, between synchronization checks of the MASTER and BACKUP load balancers
    advert_int 1
    # Authentication type and password
    authentication {
        # Authentication type; mainly PASS or AH
        auth_type PASS
        # Authentication password; within the same vrrp_instance, MASTER and BACKUP
        # must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # Virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; the same on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Configuration for 192.168.1.167:

global_defs {
    router_id lb01    # identifier; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 888
    # Interval, in seconds, between synchronization checks of the MASTER and BACKUP load balancers
    advert_int 1
    # Authentication type and password
    authentication {
        # Authentication type; mainly PASS or AH
        auth_type PASS
        # Authentication password; within the same vrrp_instance, MASTER and BACKUP
        # must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # Virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; the same on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Configuration for 192.168.1.168:

global_defs {
    router_id lb01    # identifier; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 777
    # Interval, in seconds, between synchronization checks of the MASTER and BACKUP load balancers
    advert_int 1
    # Authentication type and password
    authentication {
        # Authentication type; mainly PASS or AH
        auth_type PASS
        # Authentication password; within the same vrrp_instance, MASTER and BACKUP
        # must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # Virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; the same on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Useful keepalived commands:

# Start keepalived
systemctl start keepalived
# Enable keepalived at boot
systemctl enable keepalived
# Restart keepalived
systemctl restart keepalived
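
Once keepalived is running on all three machines, the VIP should be bound on the current MASTER and the cluster should be reachable through it (a minimal check; nc is assumed to be installed):

# On the MASTER, the virtual IP should appear on ens160
ip addr show ens160 | grep 192.168.1.99
# From any client, graphd should be reachable through the VIP via HAProxy
nc -vz 192.168.1.99 3640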

4. Other notes

How do you deploy offline? Just point the images at a private registry instead. Feel free to reach out if you run into problems.
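
A minimal sketch of what that looks like, assuming a private registry at registry.example.com (a hypothetical address): retag and push the images from a machine with internet access, then change the image: fields in docker-stack.yml accordingly.

# Pull, retag and push one image (repeat for nebula-storaged and nebula-graphd)
docker pull vesoft/nebula-metad:nightly
docker tag vesoft/nebula-metad:nightly registry.example.com/vesoft/nebula-metad:nightly
docker push registry.example.com/vesoft/nebula-metad:nightly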

This concludes this article on quickly deploying a Nebula Graph cluster with Docker Swarm. For more on deploying Nebula Graph with Docker, search the earlier articles on 腳本之家, and we hope you will continue to support 腳本之家!
