MySQL-MMM Installation Guide (Multi-Master Replication Manager for MySQL)
A minimal MMM installation needs at least two database servers and one monitoring server. The MySQL cluster configured below consists of four database servers and one monitoring server:
| function | ip | hostname | server id |
|---|---|---|---|
| monitoring host | 192.168.0.10 | mon | - |
| master 1 | 192.168.0.11 | db1 | 1 |
| master 2 | 192.168.0.12 | db2 | 2 |
| slave 1 | 192.168.0.13 | db3 | 3 |
| slave 2 | 192.168.0.14 | db4 | 4 |
If you are setting this up for personal study, finding five machines at once is not easy; virtual machines will do the job just as well.
配置完成后,使用下面的虛擬IP訪問MySQL Cluster,他們通過MMM分配到不同的服務(wù)器。
| ip | role | description |
|---|---|---|
| 192.168.0.100 | writer | applications should connect to this IP for write operations |
| 192.168.0.101 | reader | applications should connect to one of these IPs for read operations |
| 192.168.0.102 | reader | |
| 192.168.0.103 | reader | |
| 192.168.0.104 | reader | |
[Architecture diagram: db1 and db2 are co-masters replicating from each other; db3 and db4 are slaves of the active master; mon monitors all four database hosts.]
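For example, an application would send writes to the writer VIP and reads to any of the reader VIPs (a sketch; app_user and app_db are placeholder names):

app$ mysql -h 192.168.0.100 -u app_user -p app_db
app$ mysql -h 192.168.0.103 -u app_user -p app_db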
2. Basic configuration of master 1
First we install MySQL on all hosts:
aptitude install mysql-server

Then we edit the configuration file /etc/mysql/my.cnf and add the following lines - be sure to use different server ids for all hosts:
server_id = 1
log_bin = /var/log/mysql/mysql-bin.log
log_bin_index = /var/log/mysql/mysql-bin.log.index
relay_log = /var/log/mysql/mysql-relay-bin
relay_log_index = /var/log/mysql/mysql-relay-bin.index
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1
Then remove the following entry:
bind-address = 127.0.0.1

Set auto_increment_increment to the number of masters:

auto_increment_increment = 2

Set auto_increment_offset to a unique, incremented number, less than auto_increment_increment, on each server:

auto_increment_offset = 1

Do not bind to any specific IP; use 0.0.0.0 instead:

bind-address = 0.0.0.0

Afterwards we need to restart MySQL for our changes to take effect:
/etc/init.d/mysql restart
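To verify that the settings took effect after the restart, you can query them from a MySQL shell; on db1 they should match the values configured above (server_id 1, increment 2, offset 1):

(db1) mysql> SHOW VARIABLES LIKE 'server_id';
(db1) mysql> SHOW VARIABLES LIKE 'auto_increment%';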
3. Create users

Now we can create the required users. We'll need 3 different users:
| function | description | privileges |
|---|---|---|
| monitor user | used by the mmm monitor to check the health of the MySQL servers | REPLICATION CLIENT |
| agent user | used by the mmm agent to change read-only mode, replication master, etc. | SUPER, REPLICATION CLIENT, PROCESS |
| replication user | used for replication | REPLICATION SLAVE |
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%' IDENTIFIED BY 'agent_password';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.0.%' IDENTIFIED BY 'replication_password';
Note: We could be more restrictive here regarding the hosts from which the users are allowed to connect: mmm_monitor is used from 192.168.0.10. mmm_agent and replication are used from 192.168.0.11 - 192.168.0.14.
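A tighter variant could look like this (a sketch: the mmm_agent and replication grants would have to be repeated for each of 192.168.0.11 - 192.168.0.14, since a single MySQL host pattern cannot express that range):

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.0.10' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.11' IDENTIFIED BY 'agent_password';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.0.11' IDENTIFIED BY 'replication_password';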
Note: Don't use a replication_password longer than 32 characters.
4. Synchronisation of data between the databases
I'll assume that db1 contains the correct data. Even if you have an empty database, you still have to synchronize the accounts we have just created.
First make sure that no one is altering the data while we create a backup.
(db1) mysql> FLUSH TABLES WITH READ LOCK;
Then get the current position in the binary log. We will need these values when we set up replication on db2, db3 and db4.
(db1) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 374 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
DON'T CLOSE this MySQL shell. If you close it, the database lock will be released. Open a second console and type:
db1$ mysqldump -u root -p --all-databases > /tmp/database-backup.sql
Now we can remove the database-lock. Go to the first shell:
(db1) mysql> UNLOCK TABLES;

Copy the database backup to db2, db3 and db4:
db1$ scp /tmp/database-backup.sql <user>@192.168.0.12:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.13:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.14:/tmp
Then import this into db2, db3 and db4:
db2$ mysql -u root -p < /tmp/database-backup.sql
db3$ mysql -u root -p < /tmp/database-backup.sql
db4$ mysql -u root -p < /tmp/database-backup.sql
Then flush the privileges on db2, db3 and db4. We have altered the user table, and MySQL has to reread it.
(db2) mysql> FLUSH PRIVILEGES;
(db3) mysql> FLUSH PRIVILEGES;
(db4) mysql> FLUSH PRIVILEGES;
On Debian and Ubuntu, copy the password in /etc/mysql/debian.cnf from db1 to db2, db3 and db4. This password is used for starting and stopping MySQL.
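One way to do this is to copy the whole file and move it into place as root on each host (reusing the <user> placeholder from above):

db1$ scp /etc/mysql/debian.cnf <user>@192.168.0.12:/tmp
db2$ sudo mv /tmp/debian.cnf /etc/mysql/debian.cnf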
All four databases now contain the same data. We can now set up replication to keep it that way.
Note: importing the dump only adds and replaces the objects it contains; anything else already present on the target stays. For a clean copy, drop any pre-existing databases on db2, db3 and db4 before importing the dump file.
5. Setup replication
Configure replication on db2, db3 and db4 with the following commands:
(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db3) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db4) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
Please insert the values returned by “SHOW MASTER STATUS” on db1 at the <file> and <position> tags.
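With the example SHOW MASTER STATUS output from step 4, the command on db2 would read as follows (substitute the file and position your own server reported):

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
master_password='replication_password', master_log_file='mysql-bin.000002', master_log_pos=374;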
Start the slave-process on all 3 hosts:
(db2) mysql> START SLAVE;
(db3) mysql> START SLAVE;
(db4) mysql> START SLAVE;
Now check if the replication is running correctly on all hosts:
(db2) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.11
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
…
(db3) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.11
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
…
(db4) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.11
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
…
Now we have to make db1 replicate from db2. First we have to determine the values for master_log_file and master_log_pos:
(db2) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 | 98 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Now we configure replication on db1 with the following command:
(db1) mysql> CHANGE MASTER TO master_host = '192.168.0.12', master_port=3306, master_user='replication',
master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
Now insert the values returned by “SHOW MASTER STATUS” on db2 at the <file> and <position> tags.
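With the example output above, that becomes (again, substitute your own values):

(db1) mysql> CHANGE MASTER TO master_host='192.168.0.12', master_port=3306, master_user='replication',
master_password='replication_password', master_log_file='mysql-bin.000001', master_log_pos=98;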
Start the slave-process:
(db1) mysql> START SLAVE;

Now check if the replication is running correctly on db1:
(db1) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.12
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
…
Replication between the nodes should now be complete. Test it by inserting some data on both db1 and db2 and checking that the data appears on all other nodes.
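A quick smoke test could look like this (mmm_test is just an example name). Because db1 has log_slave_updates enabled, a row inserted on db2 reaches db3 and db4 via db1; and thanks to auto_increment_increment/auto_increment_offset the ids generated on the two masters cannot collide:

(db1) mysql> CREATE DATABASE mmm_test;
(db1) mysql> CREATE TABLE mmm_test.t (id INT AUTO_INCREMENT PRIMARY KEY, src VARCHAR(10));
(db1) mysql> INSERT INTO mmm_test.t (src) VALUES ('db1');
(db2) mysql> INSERT INTO mmm_test.t (src) VALUES ('db2');
(db3) mysql> SELECT * FROM mmm_test.t;

Both rows should show up on every node; drop mmm_test when you are done.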
6. Install MMM
Create user
Optional: Create user that will be the owner of the MMM scripts and configuration files. This will provide an easier method to securely manage the monitor scripts.
useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd

Monitoring host
First install dependencies:
aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl
Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-monitor*.deb and install it:
dpkg -i mysql-mmm-common_*.deb mysql-mmm-monitor*.deb
Database hosts
On Ubuntu, first install dependencies:
aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl iproute libnet-arp-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-agent*.deb and install it:
dpkg -i mysql-mmm-common_*.deb mysql-mmm-agent_*.deb

On Red Hat:
yum install -y mysql-mmm-agent

This will take care of all the dependencies, which may include:
Installed:
mysql-mmm-agent.noarch 0:2.2.1-1.el5
Dependency Installed:
libart_lgpl.x86_64 0:2.3.17-4
mysql-mmm.noarch 0:2.2.1-1.el5
perl-Algorithm-Diff.noarch 0:1.1902-2.el5
perl-DBD-mysql.x86_64 0:4.008-1.rf
perl-DateManip.noarch 0:5.44-1.2.1
perl-IPC-Shareable.noarch 0:0.60-3.el5
perl-Log-Dispatch.noarch 0:2.20-1.el5
perl-Log-Dispatch-FileRotate.noarch 0:1.16-1.el5
perl-Log-Log4perl.noarch 0:1.13-2.el5
perl-MIME-Lite.noarch 0:3.01-5.el5
perl-Mail-Sender.noarch 0:0.8.13-2.el5.1
perl-Mail-Sendmail.noarch 0:0.79-9.el5.1
perl-MailTools.noarch 0:1.77-1.el5
perl-Net-ARP.x86_64 0:1.0.6-2.1.el5
perl-Params-Validate.x86_64 0:0.88-3.el5
perl-Proc-Daemon.noarch 0:0.03-1.el5
perl-TimeDate.noarch 1:1.16-5.el5
perl-XML-DOM.noarch 0:1.44-2.el5
perl-XML-Parser.x86_64 0:2.34-6.1.2.2.1
perl-XML-RegExp.noarch 0:0.03-2.el5
rrdtool.x86_64 0:1.2.27-3.el5
rrdtool-perl.x86_64 0:1.2.27-3.el5
Configure MMM
All generic configuration-options are grouped in a separate file called /etc/mysql-mmm/mmm_common.conf. This file will be the same on all hosts in the system:
active_master_role writer
<host default>
cluster_interface eth0
pid_path /var/run/mmmd_agent.pid
bin_path /usr/lib/mysql-mmm/
replication_user replication
replication_password replication_password
agent_user mmm_agent
agent_password agent_password
</host>
<host db1>
ip 192.168.0.11
mode master
peer db2
</host>
<host db2>
ip 192.168.0.12
mode master
peer db1
</host>
<host db3>
ip 192.168.0.13
mode slave
</host>
<host db4>
ip 192.168.0.14
mode slave
</host>
<role writer>
hosts db1, db2
ips 192.168.0.100
mode exclusive
</role>
<role reader>
hosts db1, db2, db3, db4
ips 192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104
mode balanced
</role>
Don't forget to copy this file to all other hosts (including the monitoring host).
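For example, from the host where you edited it (repeat for every other host, then move the file into place as root):

db1$ scp /etc/mysql-mmm/mmm_common.conf <user>@192.168.0.10:/tmp
mon$ sudo mv /tmp/mmm_common.conf /etc/mysql-mmm/mmm_common.conf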
On the database hosts we need to edit /etc/mysql-mmm/mmm_agent.conf. Change “db1” accordingly on the other hosts:
include mmm_common.conf
this db1
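On db2, for example, the file would read:

include mmm_common.conf
this db2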
On the monitor host we need to edit /etc/mysql-mmm/mmm_mon.conf:
include mmm_common.conf
<monitor>
ip 127.0.0.1
pid_path /var/run/mmmd_mon.pid
bin_path /usr/lib/mysql-mmm/
status_path /var/lib/misc/mmmd_mon.status
ping_ips 192.168.0.1, 192.168.0.11, 192.168.0.12, 192.168.0.13, 192.168.0.14
</monitor>
<host default>
monitor_user mmm_monitor
monitor_password monitor_password
</host>
debug 0
ping_ips are some IPs that are pinged to determine whether the network connection of the monitor is OK. I used my switch (192.168.0.1) and the four database servers.
7. Start MMM
Start the agents
(On the database hosts)
Debian/Ubuntu
Edit /etc/default/mysql-mmm-agent to enable the agent:
ENABLED=1

Red Hat

RHEL/Fedora does not enable packages to start at boot time per default policy, so you might have to turn it on manually so the agents start automatically when the server is rebooted:

chkconfig mysql-mmm-agent on

Then start it:

/etc/init.d/mysql-mmm-agent start

Start the monitor
(On the monitoring host) Edit /etc/default/mysql-mmm-monitor to enable the monitor:
ENABLED=1

Then start it:
/etc/init.d/mysql-mmm-monitor start
Wait a few seconds for mmmd_mon to start up; then you can use mmm_control to check the status of the cluster:
mon$ mmm_control show
db1(192.168.0.11) master/AWAITING_RECOVERY. Roles:
db2(192.168.0.12) master/AWAITING_RECOVERY. Roles:
db3(192.168.0.13) slave/AWAITING_RECOVERY. Roles:
db4(192.168.0.14) slave/AWAITING_RECOVERY. Roles:
Because it's the first startup, the monitor does not know our hosts, so it sets all hosts to state AWAITING_RECOVERY and logs a warning message:
mon$ tail /var/log/mysql-mmm/mmm_mon.warn
…
2009/10/28 23:15:28 WARN Detected new host 'db1': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db1' to switch it online.
2009/10/28 23:15:28 WARN Detected new host 'db2': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db2' to switch it online.
2009/10/28 23:15:28 WARN Detected new host 'db3': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db3' to switch it online.
2009/10/28 23:15:28 WARN Detected new host 'db4': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db4' to switch it online.
Now we set our hosts online (db1 first, because the slaves replicate from this host):
mon$ mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db4
OK: State of 'db4' changed to ONLINE. Now you can wait some time and check its new roles!
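After a moment the monitor assigns the roles; mmm_control show should then report something along these lines (the exact distribution of the reader IPs may differ):

mon$ mmm_control show
db1(192.168.0.11) master/ONLINE. Roles: writer(192.168.0.100), reader(192.168.0.101)
db2(192.168.0.12) master/ONLINE. Roles: reader(192.168.0.102)
db3(192.168.0.13) slave/ONLINE. Roles: reader(192.168.0.103)
db4(192.168.0.14) slave/ONLINE. Roles: reader(192.168.0.104)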
Reference: http://mysql-mmm.org/mmm2:guide