
How to deploy a MongoDB sharded cluster with docker compose

Updated: 2024-10-16 10:55:30 | Author: 梔夏613
Sharding is MongoDB's strategy for distributing data across servers to handle large data sets and high-load applications. By spreading data evenly over multiple machines, sharding improves an application's scalability and performance. This article walks through deploying a MongoDB sharded cluster with docker compose.

The sharding mechanism

What sharding is

Sharding means splitting a database and spreading the pieces across machines, so you can store more data and handle more load without one very powerful server. The basic idea: split a collection into small chunks, distribute the chunks across several shards so each shard holds only part of the total data, and let a balancer keep the shards even by migrating chunks. Operations go through a router process called mongos, which learns the data-to-shard mapping from the config servers. Most deployments shard to solve disk-space problems; writes may span shards, while queries should avoid cross-shard scans wherever possible.

Main use cases for MongoDB sharding:

  • The data set is too large for a single machine's disk;
  • A single mongod cannot meet the write-throughput requirement, so writes must be spread across the shards;
  • You want to serve a large working set from memory by pooling the shard servers' own resources.

Advantages of MongoDB sharding:

Sharding reduces the number of requests any single shard must handle and raises the cluster's storage capacity and throughput; inserting a document, for instance, only touches the shard that stores it. It also reduces the data stored per shard, which improves availability and the performance of queries against large databases. Use sharding when a single MongoDB server has become a storage or performance bottleneck, or when a large deployment needs to make full use of memory.

Sharded cluster architecture

Components:

  • **Config Server:** stores the configuration of the entire sharded cluster, including chunk metadata.
  • **Shard:** stores the actual data chunks; each shard holds one portion of the cluster's data. With 3 shards and a hashed sharding rule, for example, the data is split across all 3 shards, so losing any one shard makes the cluster's data as a whole unusable. In production each shard is therefore run as a 3-member replica set, so that no shard is a single point of failure.
  • **mongos:** the front-end router and entry point of the cluster. Client applications connect through mongos, which makes the whole cluster look like a single database that clients can use transparently.

What the sharded cluster provides as a whole:

  • Request routing: the router forwards each request to the shard and chunk that own the data.
  • Data balancing: a built-in balancer keeps data evenly distributed, which is the precondition for evenly distributed storage and request load.
  • Chunk splitting: a chunk splits in two once it reaches its maximum size — 64 MB or 100,000 documents as cited here (newer MongoDB releases default to 128 MB). Chunk size is tunable; see the sketch after this list.
  • Chunk migration: to keep data evenly distributed across the shard servers, chunks migrate between shards, typically once two shards differ by about 8 chunks.
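The chunk-size setting lives in the config database and can be checked or changed from any mongos. A minimal mongosh sketch (the 64 MB value is illustrative; the unit is megabytes):

use config
// Show the current setting; no document means the built-in default applies.
db.settings.find({ _id: "chunksize" })
// Change the chunk size to 64 MB.
db.settings.updateOne(
  { _id: "chunksize" },
  { $set: { value: 64 } },
  { upsert: true }
)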

Deploying the sharded cluster

Deployment plan

shard: 3 replica sets, 3 members each
config server: 3 replica sets, 3 members each
mongos: 3 instances (mongos is stateless and does not form a replica set)

Host preparation

shard

| IP | role | port | shard name |
| --- | --- | --- | --- |
| 192.168.142.157 | shard1 | 27181 | shard1 |
| 192.168.142.157 | shard2 | 27182 | shard1 |
| 192.168.142.157 | shard3 | 27183 | shard1 |
| 192.168.142.155 | shard1 | 27181 | shard2 |
| 192.168.142.155 | shard2 | 27182 | shard2 |
| 192.168.142.155 | shard3 | 27183 | shard2 |
| 192.168.142.156 | shard1 | 27181 | shard3 |
| 192.168.142.156 | shard2 | 27182 | shard3 |
| 192.168.142.156 | shard3 | 27183 | shard3 |

config server

| IP | role | port | config name |
| --- | --- | --- | --- |
| 192.168.142.157 | config server1 | 27281 | config1 |
| 192.168.142.157 | config server2 | 27282 | config1 |
| 192.168.142.157 | config server3 | 27283 | config1 |
| 192.168.142.155 | config server1 | 27281 | config2 |
| 192.168.142.155 | config server2 | 27282 | config2 |
| 192.168.142.155 | config server3 | 27283 | config2 |
| 192.168.142.156 | config server1 | 27281 | config3 |
| 192.168.142.156 | config server2 | 27282 | config3 |
| 192.168.142.156 | config server3 | 27283 | config3 |

mongos

| IP | role | port |
| --- | --- | --- |
| 192.168.142.155 | mongos | 27381 |
| 192.168.142.155 | mongos | 27382 |
| 192.168.142.155 | mongos | 27383 |

Starting the deployment

Create the directory tree for the cluster

mkdir /docker/mongo-zone/{configsvr,shard,mongos} -p

cd into /docker/mongo-zone/

Directories for the config server replica sets

mkdir configsvr/{configsvr1,configsvr2,configsvr3}/{data,logs} -p

Directories for the shard replica sets

mkdir shard/{shard1,shard2,shard3}/{data,logs} -p

Directories for the mongos instances

mkdir mongos/{mongos1,mongos2,mongos3}/{data,logs} -p

Generate the keyfile

openssl rand -base64 756 > mongo.key

Copy it to the other hosts

scp mongo.key slave@192.168.142.156:/home/slave
scp mongo.key slave02@192.168.142.155:/home/slave02
# then, on each remote host, move the key into /docker/mongo-zone/ and fix ownership:
mv /home/slave/mongo.key .     # on 192.168.142.156
mv /home/slave02/mongo.key .   # on 192.168.142.155
chown root:root mongo.key

Build the shard replica sets

cd /docker/mongo-zone/shard/shard1

docker-compose.yml

services:
  mongo-shard1:
    image: mongo:7.0
    container_name: mongo-shard1
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard1/data:/data/db
      - /docker/mongo-zone/shard/shard1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27181:27181"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27181
  mongo-shard2:
    image: mongo:7.0
    container_name: mongo-shard2
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard2/data:/data/db
      - /docker/mongo-zone/shard/shard2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27182:27182"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27182
  mongo-shard3:
    image: mongo:7.0
    container_name: mongo-shard3
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard3/data:/data/db
      - /docker/mongo-zone/shard/shard3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27183:27183"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27183

The other two hosts are configured the same way; refer to the tables above.
Only the replica set name changes in docker-compose.yml, in two places per service:

MONGO_INITDB_REPLICA_SET_NAME
--replSet (in the mongod command line)

Initialize the replica set

docker exec -it mongo-shard1 mongosh --port 27181
use admin
rs.initiate()
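A bare rs.initiate() registers the first member under its container hostname, which causes trouble later when the shard is added to the cluster (see the addShard error below). A hedged alternative sketch: initiate with an explicit config so every member is registered by host IP from the start (shard1 topology from the table above assumed):

rs.initiate({
  _id: "shard1",
  members: [
    { _id: 0, host: "192.168.142.157:27181", priority: 1 },
    { _id: 1, host: "192.168.142.157:27182", priority: 2 },
    { _id: 2, host: "192.168.142.157:27183", priority: 3 }
  ]
})

Initiated this way, the rs.add() calls below become unnecessary.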

Create the root user

db.createUser({user:"root",pwd:"123456",roles:[{role:"root",db:"admin"}]})

Authenticate as root

db.auth("root","123456")

Add the remaining members

rs.add({host:"192.168.142.157:27182",priority:2})
rs.add({host:"192.168.142.157:27183",priority:3})

Check the replica set status

rs.status()
{
  set: 'shard1',
  date: ISODate('2024-10-15T03:25:48.706Z'),
  myState: 1,
  term: Long('2'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    lastCommittedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    appliedOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    durableOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
    lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1728962730, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'priorityTakeover',
    lastElectionDate: ISODate('2024-10-15T03:21:50.316Z'),
    electionTerm: Long('2'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1728962500, i: 1 }), t: Long('1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1728962500, i: 1 }), t: Long('1') },
    numVotesNeeded: 2,
    priorityAtElection: 2,
    electionTimeoutMillis: Long('10000'),
    priorPrimaryMemberId: 0,
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-10-15T03:21:50.320Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-10-15T03:21:50.327Z')
  },
  members: [
    {
      _id: 0,
      name: '4590140ce686:27181',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 250,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      optimeDurableDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastHeartbeat: ISODate('2024-10-15T03:25:47.403Z'),
      lastHeartbeatRecv: ISODate('2024-10-15T03:25:47.403Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.142.157:27182',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    },
    {
      _id: 1,
      name: '192.168.142.157:27182',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 435,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1728962510, i: 1 }),
      electionDate: ISODate('2024-10-15T03:21:50.000Z'),
      configVersion: 5,
      configTerm: 2,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: '192.168.142.157:27183',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 7,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      optimeDurableDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastHeartbeat: ISODate('2024-10-15T03:25:47.405Z'),
      lastHeartbeatRecv: ISODate('2024-10-15T03:25:47.906Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.142.157:27182',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    }
  ],
  ok: 1
}

Build the config server replica sets

The procedure mirrors the shard setup; only the docker-compose.yml is shown below.

services:
  mongo-config1:
    image: mongo:7.0
    container_name: mongo-config1
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr1/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27281:27281"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr  --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27281
  mongo-config2:
    image: mongo:7.0
    container_name: mongo-config2
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr2/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27282:27282"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr  --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27282
  mongo-config3:
    image: mongo:7.0
    container_name: mongo-config3
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr3/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27283:27283"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr  --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27283
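Initializing a config server replica set follows the shard procedure, with one difference: the replica set config must carry configsvr: true. A hedged sketch for config1, assuming the topology from the table above:

// In mongosh on mongo-config1 (port 27281):
rs.initiate({
  _id: "config1",
  configsvr: true,
  members: [
    { _id: 0, host: "192.168.142.157:27281" },
    { _id: 1, host: "192.168.142.157:27282" },
    { _id: 2, host: "192.168.142.157:27283" }
  ]
})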

Build the mongos routers

The procedure mirrors the sections above; only the docker-compose.yml is shown below.

services:
  mongo-mongos1:
    image: mongo:7.0
    container_name: mongo-mongos1
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos1/data:/data/db
      - /docker/mongo-zone/mongos/mongos1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27381:27381"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config1/192.168.142.157:27281,192.168.142.157:27282,192.168.142.157:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27381
  mongo-mongos2:
    image: mongo:7.0
    container_name: mongo-mongos2
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos2/data:/data/db
      - /docker/mongo-zone/mongos/mongos2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27382:27382"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config2/192.168.142.155:27281,192.168.142.155:27282,192.168.142.155:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27382
  mongo-mongos3:
    image: mongo:7.0
    container_name: mongo-mongos3
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos3/data:/data/db
      - /docker/mongo-zone/mongos/mongos3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27383:27383"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config3/192.168.142.156:27281,192.168.142.156:27282,192.168.142.156:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27383

mongos needs no keyfile of its own — copy over the one the config servers use. Make sure it really is the config servers' keyfile, or you will not be able to authenticate. One caution: in a standard sharded cluster, every mongos must point --configdb at the same, single config server replica set; pointing each mongos at a different config replica set, as the files above do, effectively creates three separate clusters, so adjust --configdb to your actual config replica set.

docker exec -it mongo-mongos1 mongosh --port 27381 -u root -p 123456 --authenticationDatabase admin
use admin

If no user exists yet, create one the same way as before.

db.auth("root","123456")
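For application access, it is common to list every mongos in a single connection string so the driver can fail over between routers. A hedged example built from this walkthrough's addresses and credentials:

mongosh "mongodb://root:123456@192.168.142.155:27381,192.168.142.155:27382,192.168.142.155:27383/admin?authSource=admin"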

Add the shards

sh.addShard("shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183")
sh.addShard("shard3/192.168.142.156:27181,192.168.142.156:27182,192.168.142.156:27183")
sh.addShard("shard2/192.168.142.155:27181,192.168.142.155:27182,192.168.142.155:27183")

At this point addShard may fail, claiming that host 192.168.142.157:27181 does not belong to shard1 —
even though it plainly is in shard1:

[direct: mongos] admin> sh.addShard("shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183")
MongoServerError[OperationFailed]: in seed list shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183, host 192.168.142.157:27181 does not belong to replica set shard1; found { compression: [ "snappy", "zstd", "zlib" ], topologyVersion: { processId: ObjectId('670e225373d36364f75d8336'), counter: 7 }, hosts: [ "b170b4e78bc6:27181", "192.168.142.157:27182", "192.168.142.157:27183" ], setName: "shard1", setVersion: 5, isWritablePrimary: true, secondary: false, primary: "192.168.142.157:27183", me: "192.168.142.157:27183", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1728984093, 1), t: 3 }, lastWriteDate: new Date(1728984093000), majorityOpTime: { ts: Timestamp(1728984093, 1), t: 3 }, majorityWriteDate: new Date(1728984093000) }, isImplicitDefaultMajorityWC: true, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1728984102377), logicalSessionTimeoutMinutes: 30, connectionId: 57, minWireVersion: 0, maxWireVersion: 21, readOnly: false, ok: 1.0, $clusterTime: { clusterTime: Timestamp(1728984093, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configTime: Timestamp(0, 1), $topologyTime: Timestamp(0, 1), operationTime: Timestamp(1728984093, 1) }

The cause is visible in the error's hosts array: because rs.initiate() was run without an explicit config, the first member registered itself under its container hostname, b170b4e78bc6:27181, rather than under the host IP.

So either use that opaque container hostname in the seed list, or rename the member. The rename is simple enough — remove the member and re-add it under the correct address, or reconfigure it in place as sketched below — but I will take the shortcut here.
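For reference, a hedged sketch of the in-place rename (run on the shard1 PRIMARY; member index 0 is assumed to be the one registered under the container hostname):

cfg = rs.conf()
// Re-register the mis-named member under its real host address.
cfg.members[0].host = "192.168.142.157:27181"
rs.reconfig(cfg)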

重新添加

sh.addShard("shard1/b170b4e78bc6:27181,192.168.142.157:27182,192.168.142.157:27183")
sh.addShard("shard3/cbfa7ed4415f:27181,192.168.142.156:27182,192.168.142.156:27183")
sh.addShard("shard2/444e6ad7d88c:27181,192.168.142.155:27182,192.168.142.155:27183")

Check the shard status

sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId('670e2ed1c3ccdfa3427b6b97') }
---
shards
[
  {
    _id: 'shard1',
    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728984938, i: 3 })
  },
  {
    _id: 'shard2',
    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985069, i: 1 })
  },
  {
    _id: 'shard3',
    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985021, i: 3 })
  }
]
---
active mongoses
[ { '7.0.14': 3 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
  'Currently enabled': 'yes',
  'Currently running': 'no',
  'Failed balancer rounds in last 5 attempts': 0,
  'Migration Results for the last 24 hours': 'No recent migrations'
}
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard1', nChunks: 1 } ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      }
    }
  }
]

Pay particular attention to the shards array:

shards
[
  {
    _id: 'shard1',
    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728984938, i: 3 })
  },
  {
    _id: 'shard2',
    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985069, i: 1 })
  },
  {
    _id: 'shard3',
    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985021, i: 3 })
  }
]

With every member present, the sharding layer is complete.

Verification

Sharding a database

Note: run all of the following on a mongos.

use test

Enable sharding for the database

sh.enableSharding("test")

Result:

{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985516, i: 9 }),
    signature: {
      hash: Binary.createFromBase64('QWe6Dj8TwrM1aVVHmnOtihKsFm0=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985516, i: 3 })
}

Hash-shard the _id field of the test collection in the test database. Sharding the collection implicitly creates the hashed index, reported as _id_hashed:

sh.shardCollection("test.test", {"_id": "hashed" })

Result:

{
  collectionsharded: 'test.test',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985594, i: 48 }),
    signature: {
      hash: Binary.createFromBase64('SqkMn9xNXjnsNfNd4WTFiHajLPc=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985594, i: 48 })
}

Enable balancing for this collection

sh.enableBalancing("test.test")
{
  acknowledged: true,
  insertedId: null,
  matchedCount: 1,
  modifiedCount: 0,
  upsertedCount: 0
}

Turn on the balancer

sh.startBalancer()
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985656, i: 4 }),
    signature: {
      hash: Binary.createFromBase64('jTVkQGDtAHtLTjhZkBc3CQx+tzM=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985656, i: 4 })
}

Create a user (in the test database)

db.createUser({user:"shardtest",pwd:"shardtest",roles:[{role:'dbOwner',db:'test'}]})

Insert test data

for (i = 1; i <= 300; i=i+1){db.test.insertOne({'name': "test"})}
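The loop above issues 300 separate inserts; an equivalent hedged sketch that round-trips once:

db.test.insertMany(
  Array.from({ length: 300 }, () => ({ name: "test" }))
)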

View the detailed sharding information

sh.status()

Result:

shardingVersion
{ _id: 1, clusterId: ObjectId('670e2ed1c3ccdfa3427b6b97') }
---
shards
[
  {
    _id: 'shard1',
    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728984938, i: 3 })
  },
  {
    _id: 'shard2',
    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985069, i: 1 })
  },
  {
    _id: 'shard3',
    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985021, i: 3 })
  }
]
---
active mongoses
[
  {
    _id: '3158a5543d69:27381',
    advisoryHostFQDNs: [],
    created: ISODate('2024-10-15T09:03:06.663Z'),
    mongoVersion: '7.0.14',
    ping: ISODate('2024-10-15T09:51:18.345Z'),
    up: Long('2891'),
    waiting: true
  },
  {
    _id: 'c5a08ca76189:27381',
    advisoryHostFQDNs: [],
    created: ISODate('2024-10-15T09:03:06.647Z'),
    mongoVersion: '7.0.14',
    ping: ISODate('2024-10-15T09:51:18.119Z'),
    up: Long('2891'),
    waiting: true
  },
  {
    _id: '5bb8b2925f52:27381',
    advisoryHostFQDNs: [],
    created: ISODate('2024-10-15T09:03:06.445Z'),
    mongoVersion: '7.0.14',
    ping: ISODate('2024-10-15T09:51:18.075Z'),
    up: Long('2891'),
    waiting: true
  }
]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
  'Currently enabled': 'yes',
  'Currently running': 'no',
  'Failed balancer rounds in last 5 attempts': 0,
  'Migration Results for the last 24 hours': 'No recent migrations'
}
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard1', nChunks: 1 } ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      }
    }
  },
  {
    database: {
      _id: 'test',
      primary: 'shard2',
      partitioned: false,
      version: {
        uuid: UUID('3b193276-e88e-42e1-b053-bcb61068a865'),
        timestamp: Timestamp({ t: 1728985516, i: 1 }),
        lastMod: 1
      }
    },
    collections: {
      'test.test': {
        shardKey: { _id: 'hashed' },
        unique: false,
        balancing: true,
        chunkMetadata: [
          { shard: 'shard1', nChunks: 2 },
          { shard: 'shard2', nChunks: 2 },
          { shard: 'shard3', nChunks: 2 }
        ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: Long('-6148914691236517204') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) },
          { min: { _id: Long('-6148914691236517204') }, max: { _id: Long('-3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 1 }) },
          { min: { _id: Long('-3074457345618258602') }, max: { _id: Long('0') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 2 }) },
          { min: { _id: Long('0') }, max: { _id: Long('3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 3 }) },
          { min: { _id: Long('3074457345618258602') }, max: { _id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },
          { min: { _id: Long('6148914691236517204') }, max: { _id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }
        ],
        tags: []
      }
    }
  }
]

Focus on the chunks array:

chunks: [
          { min: { _id: MinKey() }, max: { _id: Long('-6148914691236517204') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) },
          { min: { _id: Long('-6148914691236517204') }, max: { _id: Long('-3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 1 }) },
          { min: { _id: Long('-3074457345618258602') }, max: { _id: Long('0') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 2 }) },
          { min: { _id: Long('0') }, max: { _id: Long('3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 3 }) },
          { min: { _id: Long('3074457345618258602') }, max: { _id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },
          { min: { _id: Long('6148914691236517204') }, max: { _id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }
        ],

You can clearly see chunks spread across shard1, shard2 and shard3.

View the collection's data distribution across the shards

db.test.getShardDistribution()
Shard shard2 at shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181
{
  data: '3KiB',
  docs: 108,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 54
}
---
Shard shard1 at shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181
{
  data: '3KiB',
  docs: 89,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 44
}
---
Shard shard3 at shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181
{
  data: '3KiB',
  docs: 103,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 51
}
---
Totals
{
  data: '10KiB',
  docs: 300,
  chunks: 6,
  'Shard shard2': [ '36 % data', '36 % docs in cluster', '37B avg obj size on shard' ],
  'Shard shard1': [
    '29.66 % data',
    '29.66 % docs in cluster',
    '37B avg obj size on shard'
  ],
  'Shard shard3': [
    '34.33 % data',
    '34.33 % docs in cluster',
    '37B avg obj size on shard'
  ]
}

The three shards hold roughly equal shares of the data.

Print the sharding status

db.printShardingStatus()

Disable balancing for the collection

sh.disableBalancing("test.test")

Result:

{
  acknowledged: true,
  insertedId: null,
  matchedCount: 1,
  modifiedCount: 1,
  upsertedCount: 0
}
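To double-check the balancer and the per-collection flag after toggling them, a short hedged sketch (run on a mongos):

sh.getBalancerState()    // true if the balancer is enabled cluster-wide
sh.isBalancerRunning()   // whether a balancing round is in progress right now
// The collection-level flag written by sh.disableBalancing("test.test"):
db.getSiblingDB("config").collections.find({ _id: "test.test" }, { noBalance: 1 })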

That concludes this walkthrough of deploying a MongoDB sharded cluster with docker compose.
