MySQL InnoDB Cluster
In this article we will see how to install a MySQL InnoDB Cluster, Community Edition. This kind of cluster is popular in corporate environments, where the option of getting support from a well-known vendor is valued. For the same reason, we will use the vendor's "in-house" product as the base operating system: Oracle Linux.
This article does not aim to cover the installation of Oracle Linux, but note that it is very similar to installing RHEL; in principle the only requirements are that each host has its own hostname and IP address, with all nodes on the same subnet.
Architecture
Oracle offers 3 options for building a MySQL cluster:
1. MySQL InnoDB Cluster
This is a high-availability (HA) setup based on MySQL Group Replication with automatic failover. It can run in single-primary or multi-primary mode. It is managed through the MySQL Shell AdminAPI and includes MySQL Router for automatic routing. All of its nodes are deployed on the same infrastructure.
2. MySQL InnoDB ClusterSet
It is a disaster recovery (DR) layer that can be added on top of an InnoDB Cluster, coordinating multiple clusters in different locations.
3. MySQL InnoDB ReplicaSet
It is a lightweight alternative to InnoDB Cluster that uses classic asynchronous replication (not Group Replication), with one primary node and the rest acting as replicas. Note that in this case there is no automatic failover, only manual.
We are interested in the InnoDB Cluster, arranged as shown in the following diagram:

As we can see, it is based on the following components:
- MySQL Servers amb Group Replication
- MySQL Router
- MySQL Shell (which provides access to the AdminAPI)
To create it, we will need 3 MySQL instances (named my0[123], with IP addresses 192.168.2.8[012]) so that the cluster can lose one node without being affected.
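As an aside, the "can lose one node" figure follows from Group Replication's majority quorum: a group of n members stays available while a majority is reachable, i.e. it tolerates floor((n-1)/2) simultaneous failures. A quick sketch of that arithmetic:

```shell
# Group Replication keeps quorum while a majority of members is reachable,
# so a group of n members tolerates floor((n-1)/2) simultaneous failures.
tolerance() {
    n=$1
    echo $(( (n - 1) / 2 ))
}

for n in 1 3 5 7; do
    echo "members=$n tolerated_failures=$(tolerance "$n")"
done
```

This is why 3 is the minimum size that survives one failure, and why even-sized groups gain nothing over the next odd size down.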
Setup. Step by step
Next we will install the components needed to create the cluster. Unless stated otherwise, all of the steps below must be performed on each of the 3 nodes that will form part of the cluster.
1. Installing MySQL Server and dependencies
This component is the good old MySQL server itself.
Although consulting it is not required, throughout this article we will include links to the documentation we reviewed while performing this setup:
- 7.4 Deploying a Production InnoDB Cluster
- 6.2 Installing AdminAPI Software Components
- Chapter 2 Installing MySQL
For the installation we will show the 2 main ways of doing it; once both are presented, pick whichever suits you best:
1.1 Using YUM repositories
This method is the most convenient, since the packaging is managed by the repository itself. It is the one we use and, additionally, we will assume we want to install a specific version (8.0); that is why, below, the 8.4 LTS repos enabled by default by the release package are disabled and the 8.0 ones enabled instead:
$ yum update
$ wget https://dev.mysql.com/get/mysql84-community-release-el8-2.noarch.rpm
$ rpm -Uvh mysql84-community-release-el8-2.noarch.rpm
Warning: native mysql package from platform vendor seems to be enabled.
Please consider to disable this before installing packages from repo.mysql.com.
Run: yum module -y disable mysql
$ yum module -y disable mysql
$ yum repolist all | grep mysql
$ dnf config-manager --disable mysql-8.4-lts-community
$ dnf config-manager --disable mysql-tools-8.4-lts-community
$ dnf config-manager --enable mysql80-community
$ dnf config-manager --enable mysql-tools-community
$ yum update
$ yum repolist enabled | grep mysql
mysql-connectors-community    MySQL Connectors Community
mysql-tools-community         MySQL Tools Community
mysql80-community             MySQL 8.0 Community Server
$ yum install mysql-community-server
$ systemctl start mysqld
$ systemctl status mysqld
Look up the temporary password set by the installation and change it (otherwise we will be forced to do so later anyway):
$ grep 'temporary password' /var/log/mysqld.log
2025-11-23T18:02:21.077354Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: r5+JjRwt>-a6
$ mysql -uroot -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MYpass2025!';
mysql> FLUSH PRIVILEGES;
mysql> FLUSH TABLES;
mysql> quit
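The extraction step above can also be scripted; a minimal sketch, using a sample log line so it is self-contained (on a real node you would point it at /var/log/mysqld.log instead):

```shell
# Sketch: pull the auto-generated root password out of the mysqld log.
# A sample line stands in for /var/log/mysqld.log here.
LOG=$(mktemp)
echo '2025-11-23T18:02:21.077354Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: r5+JjRwt>-a6' > "$LOG"

# The password is the last whitespace-separated field of the matching line.
TMP_PASS=$(grep 'temporary password' "$LOG" | tail -1 | awk '{print $NF}')
echo "temporary root password: $TMP_PASS"
rm -f "$LOG"
```

Note this only works for passwords without embedded spaces, which holds for the generated ones.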
1.2 Using standalone RPM packages
This other method is worth considering because the commercial (paid) edition is installed by working directly with standalone packages, so it is useful for simulating exactly the same scenario we would face if we were installing that edition.
The packages must be downloaded from here. Keep the following notice in mind:
Dependency relationships exist among some of the packages. If you plan to install many of the packages,
you may wish to download the RPM bundle tar file instead, which contains all the RPM packages listed
above, so that you need not download them separately.
In most cases, you need to install the mysql-community-server, mysql-community-client,
mysql-community-client-plugins, mysql-community-libs, mysql-community-icu-data-files,
mysql-community-common, and mysql-community-libs-compat packages to get a functional, standard MySQL
installation. To perform such a standard, basic installation, go to the folder that contains all those
packages (and, preferably, no other RPM packages with similar names), and issue the following command:
$> sudo yum install mysql-community-{server,client,client-plugins,icu-data-files,common,libs}-*
Installation of previous versions of MySQL using older packages might have created a configuration file named /usr/my.cnf. It is highly recommended that you examine the contents of the file and migrate the desired settings inside to the file /etc/my.cnf file, then remove /usr/my.cnf.
$> systemctl start mysqld
We install the packages and, just as before, we will have to look up and change the temporary password set by the installation.
2. Installing MySQL Shell
This component could be summed up as an advanced MySQL client.
Let's proceed with its installation:
$ yum install mysql-shell
3. Installing MySQL Router
This component is a middleware that provides transparent routing between the application and the backend MySQL servers:

Let's proceed with its installation:
$ yum install mysql-router-community
We will leave its configuration for later.
4. Preparing to create the MySQL InnoDB cluster
We install python3 as a dependency:
$ yum install python3
$ alternatives --set python /usr/bin/python3
We open the host firewall ports to allow cluster communication:
$ firewall-cmd --permanent --add-port=3306/tcp
success
$ firewall-cmd --permanent --add-port=33060/tcp
success
$ firewall-cmd --permanent --add-port=33061/tcp
success
$ firewall-cmd --reload
We connect to MySQL Shell:
$ mysqlsh
MySQL JS > \connect root@localhost:3306
Creating a session to 'root@localhost:3306'
Please provide the password for 'root@localhost:3306': ***********
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 19
Server version: 8.0.44 MySQL Community Server - GPL
No default schema selected; type \use to set one.
And from there, we create an admin account (the InnoDB Cluster server configuration account):
MySQL localhost:3306 ssl JS > dba.configureInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as localhost.localdomain:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2

Please provide an account name (e.g: icroot@%) to have it created with the necessary privileges or leave empty and press Enter to cancel.
Account Name: icroot@%
Password for new account: MYpass2026!
Confirm password: MYpass2026!

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
| server_id                              | 1             |                | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
Creating user icroot@%.
Account icroot@% was successfully created.

Configuring instance...

WARNING: '@@binlog_transaction_dependency_tracking' is deprecated and will be removed in a future release. (Code 1287).
The instance 'localhost.localdomain:3306' was configured to be used in an InnoDB cluster.

Restarting MySQL...
NOTE: MySQL server at localhost.localdomain:3306 was restarted.
MySQL localhost:3306 ssl JS >
4.1 A note about users
A MySQL InnoDB cluster has at least 3 kinds of users: root (a root specific to the service, not to be confused with the system root, which will also exist but with permissions limited to the localhost scope), the administrator account(s), and the ones related to MySQL Router. You can find more information in the documentation quoted below. In this tutorial we will use the admin account created earlier plus the MySQL Router one:
These accounts can be used to administer an InnoDB Cluster after you have completed the configuration process. To create an InnoDB Cluster administrator account for an InnoDB ClusterSet deployment, you issue a cluster.setupAdminAccount() command after you have added all the instances to that cluster.
These accounts are used by MySQL Router to connect to server instances in an InnoDB Cluster. The process to create a MySQL Router account is the same as for an InnoDB Cluster administrator account, but using a cluster.setupRouterAccount() command.
Since the nodes must be able to work over the network, we adjust the following in my.cnf to make that possible:
$ vi /etc/my.cnf
report_host = 192.168.2.80
:wq
$ systemctl restart mysqld
Each node gets its own IP (my01 -> 192.168.2.80, my02 -> 192.168.2.81 and my03 -> 192.168.2.82).
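A sketch of how that per-node value could be derived from the hostname instead of edited by hand (the my0N names and IPs are the ones used in this article; the snippet writes to a temporary copy rather than the real /etc/my.cnf, so it is safe to run anywhere):

```shell
# Sketch: pick the report_host value from the node's short hostname
# (my01/my02/my03, the names used in this article).
node=my02                 # on a real node: node=$(hostname -s)
case "$node" in
    my01) ip=192.168.2.80 ;;
    my02) ip=192.168.2.81 ;;
    my03) ip=192.168.2.82 ;;
    *)    echo "unknown node: $node" >&2; exit 1 ;;
esac

cnf=$(mktemp)             # stand-in for /etc/my.cnf
printf 'report_host = %s\n' "$ip" >> "$cnf"
cat "$cnf"
```

After appending the line for real, mysqld still needs the restart shown above for the change to take effect.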
We run a check to see whether there is any impediment to the nodes becoming part of a cluster:
$ mysqlsh
mysql-js> dba.checkInstanceConfiguration('icroot@192.168.2.80:3306')
Please provide the password for 'icroot@192.168.2.80:3306': ***********
Save password for 'icroot@192.168.2.80:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...
This instance reports its own address as 192.168.2.80:3306
Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected
Checking instance configuration...
Instance configuration is compatible with InnoDB cluster
The instance '192.168.2.80:3306' is valid to be used in an InnoDB cluster.
{
"status": "ok"
}
All good. Now we will configure the instances so they can form the InnoDB cluster:
$ mysqlsh
MySQL JS > dba.configureInstance('icroot@192.168.2.80:3306')
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...
This instance reports its own address as 192.168.2.80:3306
applierWorkerThreads will be set to the default value of 4.
The instance '192.168.2.80:3306' is valid to be used in an InnoDB cluster.
The instance '192.168.2.80:3306' is already ready to be used in an InnoDB cluster.
Successfully enabled parallel appliers.
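If you prefer to drive this check/configure pass over the three nodes from a single host, it can be scripted. A dry-run sketch (it only prints the mysqlsh calls; set DRY_RUN to empty to actually run them, which requires the live servers and the icroot password at hand):

```shell
# Dry-run sketch: print the mysqlsh calls that would validate and
# configure every node. Set DRY_RUN= (empty) to really execute them.
DRY_RUN=echo
out=$(
    for ip in 192.168.2.80 192.168.2.81 192.168.2.82; do
        $DRY_RUN mysqlsh --js -e "dba.checkInstanceConfiguration('icroot@${ip}:3306')"
        $DRY_RUN mysqlsh --js -e "dba.configureInstance('icroot@${ip}:3306')"
    done
)
printf '%s\n' "$out"
```

The interactive prompts seen above (passwords, restart confirmation) would still appear when run for real, unless credentials are stored beforehand.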
From this point on, the remaining steps only need to be performed on the 1st node.
5. Creating the MySQL InnoDB cluster
$ mysqlsh
MySQL JS > \connect icroot@192.168.2.80:3306
Creating a session to 'icroot@192.168.2.80:3306'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 17
Server version: 8.0.44 MySQL Community Server - GPL
No default schema selected; type \use to set one.
MySQL 192.168.2.80:3306 ssl JS > var cluster = dba.createCluster('testCluster')
A new InnoDB Cluster will be created on instance '192.168.2.80:3306'.
Validating instance configuration at 192.168.2.80:3306...
This instance reports its own address as 192.168.2.80:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '192.168.2.80:3306'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
Creating InnoDB Cluster 'testCluster' on '192.168.2.80:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
MySQL 192.168.2.80:3306 ssl JS > cluster.status()
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.2.80:3306",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"192.168.2.80:3306": {
"address": "192.168.2.80:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.44"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.2.80:3306"
}
At this point we can already see the 1st node of our cluster, which has established itself as primary.
If we want to check it again but have closed the shell for whatever reason, we can do so with the following SQL query:
MySQL 192.168.2.80:3306 ssl JS > \sql SELECT * FROM performance_schema.replication_group_members;
Fetching global names for auto-completion... Press ^C to stop.
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8d2b6a7f-c896-11f0-93a9-bc241154d3c8 | 192.168.2.80 |        3306 | OFFLINE      |             |                | MySQL                      |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
1 row in set (0.0004 sec)
We add the 2nd instance to the cluster:
MySQL 192.168.2.80:3306 ssl JS > cluster.addInstance('icroot@192.168.2.81:3306')
NOTE: The target instance '192.168.2.81:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '192.168.2.81:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
Validating instance configuration at 192.168.2.81:3306...
This instance reports its own address as 192.168.2.81:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '192.168.2.81:3306'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 192.168.2.81:3306 is being cloned from 192.168.2.80:3306
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
NOTE: 192.168.2.81:3306 is shutting down...
* Waiting for server restart... ready
* 192.168.2.81:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 75.29 MB transferred in about 1 second (~75.29 MB/s)
State recovery already finished for '192.168.2.81:3306'
The instance '192.168.2.81:3306' was successfully added to the cluster.
Done. Let's look at the new state of the cluster, now with 2 nodes: the primary and the one we just added, the secondary:
MySQL 192.168.2.80:3306 ssl JS > cluster.status()
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.2.80:3306",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"192.168.2.80:3306": {
"address": "192.168.2.80:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.44"
},
"192.168.2.81:3306": {
"address": "192.168.2.81:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.44"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.2.80:3306"
}
As we can see, the status and statusText fields warn us that in this state, for quorum reasons, the cluster is not yet tolerant to failures.
And this is how it looks through the SQL query:
MySQL 192.168.2.80:3306 ssl JS > \sql SELECT * FROM performance_schema.replication_group_members;
Fetching global names for auto-completion... Press ^C to stop.
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8d2b6a7f-c896-11f0-93a9-bc241154d3c8 | 192.168.2.80 |        3306 | ONLINE       | PRIMARY     | 8.0.44         | MySQL                      |
| group_replication_applier | 9cee4b16-de9a-11f0-b103-bc24116bd82a | 192.168.2.81 |        3306 | ONLINE       | SECONDARY   | 8.0.44         | MySQL                      |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
2 rows in set (0.0005 sec)
We repeat the procedure to add the 3rd and final node to the cluster:
MySQL 192.168.2.80:3306 ssl JS > cluster.addInstance('icroot@192.168.2.82:3306')
And finally we check its state:
MySQL 192.168.2.80:3306 ssl JS > cluster.status()
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.2.80:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"192.168.2.80:3306": {
"address": "192.168.2.80:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.44"
},
"192.168.2.81:3306": {
"address": "192.168.2.81:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.44"
},
"192.168.2.82:3306": {
"address": "192.168.2.82:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.44"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.2.80:3306"
}
MySQL 192.168.2.80:3306 ssl JS > \sql SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8d2b6a7f-c896-11f0-93a9-bc241154d3c8 | 192.168.2.80 |        3306 | ONLINE       | PRIMARY     | 8.0.44         | MySQL                      |
| group_replication_applier | 9cee4b16-de9a-11f0-b103-bc24116bd82a | 192.168.2.81 |        3306 | ONLINE       | SECONDARY   | 8.0.44         | MySQL                      |
| group_replication_applier | b7c7fba0-de9b-11f0-a4c2-bc24111ef4d6 | 192.168.2.82 |        3306 | ONLINE       | SECONDARY   | 8.0.44         | MySQL                      |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.0007 sec)
At this point the cluster is up, running, and tolerant to the loss of one node (any of them). Beyond that we can run into trouble, so we will try to make sure that interventions or incidents affect only 1 node at a time, as far as possible.
6. Routing to the instances
If we run a few tests against the cluster to see how it behaves, we will see that in general, as long as one node remains alive, the cluster recomposes itself, and if the affected node is the master, the cluster promotes a slave to take over. But if we lose the master node while connecting to the cluster through its IP address, won't we lose access to it?
The answer is yes, and the reason is that with HA clusters it is bad practice to connect directly to one specific IP address (unless it is a special one), for obvious reasons: if the node behind that IP is unavailable, we lose access to the service. That is what MySQL Router is for. Its documentation suggests installing it on the application server (the same host where the application runs), but if we prefer not to (e.g. to keep a cleaner separation between the application server and the database components), there is the alternative of placing it on the cluster nodes themselves (all of them), combined with either a Network Load Balancer or a Virtual IP. An example of the latter is keepalived, a tool that uses the VRRP protocol to provide high availability for an IP that floats between the nodes.
So let's get to work configuring MySQL Router, as well as deploying and configuring keepalived. These are the steps:
6.1 Bootstrapping MySQL Router
We run the mysqlrouter command with the parameters needed to (1) generate an access configuration for the cluster and (2) create a user that this component (MySQL Router) can use to query the cluster state and make routing decisions at any given moment:
$ mysqlrouter --user=root --bootstrap icroot@localhost:3306 --conf-bind-address=0.0.0.0 --account mysqlrouter --force
Please enter MySQL password for icroot: MYpass2026!
# Bootstrapping system MySQL Router 8.0.44 (MySQL Community - GPL) instance...
Please enter MySQL password for mysqlrouter: MYrouterpass2025!
- Creating account(s)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /etc/mysqlrouter/mysqlrouter.conf
Existing configuration backed up to '/etc/mysqlrouter/mysqlrouter.conf.bak'
# MySQL Router configured for the InnoDB Cluster 'testCluster'
After this MySQL Router has been started with the generated configuration
$ /etc/init.d/mysqlrouter restart
or
$ systemctl start mysqlrouter
or
$ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf
InnoDB Cluster 'testCluster' can be reached by connecting to:
## MySQL Classic protocol
- Read/Write Connections: localhost:6446, /tmp/mysql.sock
- Read/Only Connections: localhost:6447, /tmp/mysqlro.sock
## MySQL X protocol
- Read/Write Connections: localhost:6448, /tmp/mysqlx.sock
- Read/Only Connections: localhost:6449, /tmp/mysqlxro.sock
We adjust the generated configuration and some permissions:
$ sed -i 's/user=root/#user=root/g' /etc/mysqlrouter/mysqlrouter.conf
$ chown -R mysqlrouter:mysqlrouter /etc/mysqlrouter /var/lib/mysqlrouter /var/log/mysqlrouter
$ chmod 644 /etc/mysqlrouter/mysqlrouter.conf
$ systemctl restart mysqlrouter
We verify that it works correctly:
[root@localhost ~]# mysql -u mysqlrouter -p -h 127.0.0.1 -P 6446
Enter password:
mysql> SELECT @@hostname, @@port;
+-----------------------+--------+
| @@hostname            | @@port |
+-----------------------+--------+
| my01                  |   3306 |
+-----------------------+--------+
At this point we can create a database, populate it, and test inserting data from a slave to verify that the write is routed through the master (the only node that can write to the database):
my01$ mysql -u root -p
mysql> CREATE DATABASE test;
mysql> USE test;
mysql> CREATE TABLE datos (id INT NOT NULL, nombre VARCHAR(20), PRIMARY KEY (id));
mysql> INSERT INTO datos (id,nombre) VALUES (1, "Pepe");
mysql> INSERT INTO datos (id,nombre) VALUES (2, "Luis");
mysql> INSERT INTO datos (id,nombre) VALUES (3, "Alberto");
mysql> INSERT INTO datos (id,nombre) VALUES (4, "Juan");
mysql> INSERT INTO datos (id,nombre) VALUES (5, "Francisco");
mysql> CREATE USER 'test_user'@'%' IDENTIFIED BY 'Pass-word-123';
mysql> GRANT ALL PRIVILEGES ON test.* TO 'test_user'@'%';
mysql> FLUSH PRIVILEGES;
mysql> QUIT;

my02$ mysql -h 127.0.0.1 -P 6446 -u test_user -p -e "INSERT INTO datos (id,nombre) VALUES (6, 'Damian')" test
my02$ mysql -u test_user -p -e "SELECT * FROM datos" test
+----+-----------+
| id | nombre    |
+----+-----------+
|  1 | Pepe      |
|  2 | Luis      |
|  3 | Alberto   |
|  4 | Juan      |
|  5 | Francisco |
|  6 | Damian    |
+----+-----------+
Having reached this point, it no longer matters on which node a write operation is performed, since MySQL Router will forward it to the master node, while the remaining requests (reads) can be served locally. But we have the following drawbacks:
- Requests are sent to the IPs of the 3 nodes. If one or more of them is unavailable, those requests will fail.
- Writes that land on a node other than the master have to be redirected by MySQL Router, with the resulting network traffic overhead.
For this reason, we may want to use a VIP that concentrates all requests to the database service.
6.2 Installing and configuring keepalived
$ yum install keepalived
We create the service configuration:
$ vi /etc/keepalived/keepalived.conf
global_defs {
script_user keepalived_script
enable_script_security
}
vrrp_script check_mysql {
script "/usr/local/bin/check_mysql_master.sh"
interval 2 # Check every 2 seconds
weight 2 # Add priority if script passes
fall 2 # Require 2 failures to consider it down
rise 2 # Require 2 successes to consider it up
}
vrrp_instance VI_1 {
state BACKUP # Set all to BACKUP; let the script decide
interface ens18 # Your main network interface
virtual_router_id 51 # Must be same on all nodes
priority 100 # Base priority
virtual_ipaddress {
#192.168.2.100 # Enough for 'ip s a'
192.168.2.100/24 dev ens18 label ens18:1 # Needed for 'ifconfig'
}
track_script {
check_mysql
}
}
:wq
As well as the script that checks who the master is:
$ vi /usr/local/bin/check_mysql_master.sh
#!/bin/bash
# Connect to the local MySQL and check if it is the PRIMARY node
# This requires a user with permissions to view performance_schema
IS_PRIMARY=$(mysql -u cluster_check_user -p'K33p-4liv3d' -N -s -e "SELECT MEMBER_ROLE FROM performance_schema.replication_group_members WHERE MEMBER_ID=@@server_uuid;")
if [ "$IS_PRIMARY" = "PRIMARY" ]; then
exit 0 # Success: I am the Master
else
exit 1 # Failure: I am a Slave or down
fi
:wq
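The decision logic of this check script can be exercised without a live cluster by shadowing the mysql client with a shell function; a test sketch (the stubs are for illustration only, not part of the deployed script):

```shell
# Sketch: test the PRIMARY/SECONDARY decision of the keepalived check
# by shadowing the mysql client with a function (stub for testing).
check_role() {
    role=$(mysql -u cluster_check_user -p'K33p-4liv3d' -N -s -e \
        "SELECT MEMBER_ROLE FROM performance_schema.replication_group_members WHERE MEMBER_ID=@@server_uuid;")
    [ "$role" = "PRIMARY" ]
}

mysql() { echo "PRIMARY"; }    # stub: pretend this node is the primary
check_role && echo "primary: keepalived would raise the priority"

mysql() { echo "SECONDARY"; }  # stub: pretend this node is a replica
check_role || echo "secondary: keepalived would lower the priority"
```

Because shell functions take precedence over binaries in PATH, the stub intercepts the call that would otherwise hit the real server.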
We create the system user that will run the check script (a keepalived security requirement):
$ useradd -r -s /sbin/nologin keepalived_script
$ chown keepalived_script:keepalived_script /usr/local/bin/check_mysql_master.sh
$ chmod 755 /usr/local/bin/check_mysql_master.sh
We also create the user that will query MySQL for who the master is:
$ mysql -u root -p
mysql> CREATE USER 'cluster_check_user'@'localhost' IDENTIFIED BY 'K33p-4liv3d';
mysql> GRANT SELECT ON performance_schema.* TO 'cluster_check_user'@'localhost';
mysql> FLUSH PRIVILEGES;
We allow the VRRP protocol through the firewall and open the MySQL Router ports:
# Open VRRP protocol
$ firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
# Open MySQL Router Read/Write port
$ firewall-cmd --permanent --add-port=6446/tcp
# Open MySQL Router Read-Only port
$ firewall-cmd --permanent --add-port=6447/tcp
$ firewall-cmd --reload
Finally, we create a custom SELinux policy that allows the check script to run:
$ ausearch -m avc -ts recent | audit2allow -M my_keepalived_mysql $ semodule -i my_keepalived_mysql.pp
With this we now have the IP that applications must target to reach the cluster, always going through the master.
6.3 Testing keepalived
We can try taking down (stopping) the master node to watch MySQL InnoDB choose another node to hand that role to (the new master), and to see keepalived move the floating IP to that same node so that queries are directed to it.
Tips
Here is a series of tricks that were used while putting this lab together.
Quick check of the cluster state
To be more agile, we can create an alias with one of the following one-liners (via mysqlsh or SQL, respectively):
$ mysqlsh icroot@localhost --execute "print(dba.getCluster('testCluster').status())"
or
$ mysql -h 127.0.0.1 -P 6446 -u icroot -p -e "SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;"
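Either one-liner can also be wrapped in a small shell function in ~/.bashrc (icstatus is just a name chosen here); a sketch:

```shell
# Sketch: wrap the status one-liner in a shell function; drop it into
# ~/.bashrc to have it handy. Running it needs the live cluster.
icstatus() {
    mysqlsh icroot@localhost --execute \
        "print(dba.getCluster('testCluster').status())"
}

# Confirm the function is defined (does not contact any server):
command -v icstatus
```

A function rather than an alias keeps it usable from non-interactive shells as well.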
Interruption while creating the cluster
If there was some interruption while the cluster was being created with the 3 nodes (e.g. the main host rebooted, or we stopped the node to continue another day), the cluster will have to be re-instantiated:
// This will scan the metadata and restart the cluster on this node
MySQL 192.168.2.80:3306 ssl JS > var cluster = dba.rebootClusterFromCompleteOutage()
How do I connect to the cluster from MySQL Shell once it is already created?
// Assign the existing cluster to the variable 'cluster' first
MySQL 192.168.2.80:3306 ssl JS > var cluster = dba.getCluster()
// or, by name:
MySQL 192.168.2.80:3306 ssl JS > var cluster = dba.getCluster('testCluster')
MySQL 192.168.2.80:3306 ssl JS > cluster.status()
How do I recover the cluster after all nodes have been shut down?
When all nodes are stopped (complete outage), the Group Replication (GR) plugin is disabled as a safety measure to prevent split-brain scenarios (there is a risk that a node holding older data promotes itself to primary before the real primary does). We must start the nodes (in any order) and, once they are all up, run the following command from the node we believe was the master to re-initialize Group Replication:
MySQL 192.168.2.80:3306 ssl JS > var cluster = dba.rebootClusterFromCompleteOutage('testCluster');
Restoring the Cluster 'testCluster' from complete outage...
Cluster instances: '192.168.2.80:3306' (OFFLINE), '192.168.2.81:3306' (OFFLINE), '192.168.2.82:3306' (OFFLINE)
Waiting for instances to apply pending received transactions...
Validating instance configuration at 192.168.2.80:3306...
This instance reports its own address as 192.168.2.80:3306
Instance configuration is suitable.
* Waiting for seed instance to become ONLINE...
192.168.2.80:3306 was restored.
Validating instance configuration at 192.168.2.81:3306...
This instance reports its own address as 192.168.2.81:3306
Instance configuration is suitable.
Rejoining instance '192.168.2.81:3306' to cluster 'testCluster'...
Re-creating recovery account...
NOTE: User 'mysql_innodb_cluster_3068219787'@'%' already existed at instance '192.168.2.80:3306'. It will be deleted and created again with a new password.
* Waiting for the Cluster to synchronize with the PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
The instance '192.168.2.81:3306' was successfully rejoined to the cluster.
Validating instance configuration at 192.168.2.82:3306...
This instance reports its own address as 192.168.2.82:3306
Instance configuration is suitable.
Rejoining instance '192.168.2.82:3306' to cluster 'testCluster'...
Re-creating recovery account...
NOTE: User 'mysql_innodb_cluster_3124054725'@'%' already existed at instance '192.168.2.80:3306'. It will be deleted and created again with a new password.
* Waiting for the Cluster to synchronize with the PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
The instance '192.168.2.82:3306' was successfully rejoined to the cluster.
The Cluster was successfully rebooted.
How do I recover the cluster if not all nodes are available?
We can do it the same way as the command in the previous point, but passing the {force: true} parameter to force the recovery even though not all nodes are available:
MySQL 192.168.2.80:3306 ssl JS > var cluster = dba.rebootClusterFromCompleteOutage('testCluster',{force: true});
Restoring the Cluster 'testCluster' from complete outage...
Cluster instances: '192.168.2.80:3306' (OFFLINE), '192.168.2.81:3306' (UNREACHABLE), '192.168.2.82:3306' (UNREACHABLE)
WARNING: One or more instances of the Cluster could not be reached and cannot be rejoined nor ensured to be OFFLINE: '192.168.2.81:3306', '192.168.2.82:3306'. Cluster may diverge and become inconsistent unless all instances are either reachable or certain to be OFFLINE and not accepting new transactions. You may use the 'force' option to bypass this check and proceed anyway.
Waiting for instances to apply pending received transactions...
Validating instance configuration at 192.168.2.80:3306...
This instance reports its own address as 192.168.2.80:3306
Instance configuration is suitable.
* Waiting for seed instance to become ONLINE...
192.168.2.80:3306 was restored.
The Cluster was successfully rebooted.
With this, the cluster will start, even if with just a single node.
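When the missing nodes come back online they must be rejoined to the rebooted cluster (from MySQL Shell, e.g. `cluster.rejoinInstance('192.168.2.81:3306')`). A hypothetical helper to pull the nodes still pending out of the "Cluster instances:" line of the reboot output:

```shell
#!/bin/sh
# Hypothetical helper: list the members reported as UNREACHABLE in the
# "Cluster instances:" line printed by rebootClusterFromCompleteOutage().
unreachable_members() {
  echo "$1" | grep -o "'[^']*' (UNREACHABLE)" | cut -d"'" -f2
}
```

Each address it prints is a candidate for `cluster.rejoinInstance()` once that node is reachable again.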
