MySQL & MariaDB Temporary Tables and Temporary Files

  1. When does MySQL use temporary tables

1. UNION queries (note: in the latest MariaDB 10.1, UNION ALL no longer uses a temporary table);

2. Views that use the TEMPTABLE algorithm, or views used inside a UNION query;

3. When the ORDER BY clause differs from the GROUP BY clause;

4. In a join, when the ORDER BY columns do not all come from the first (driving) table;

5. DISTINCT queries combined with ORDER BY;

6. When the statement uses the SQL_SMALL_RESULT option;

7. Subqueries in the FROM clause (derived tables);

8. Tables created for subquery or semi-join materialization.

 You can check the Extra column of EXPLAIN output: "Using temporary" means a temporary table is used.
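As a quick illustration (a minimal sketch; tables t1 and t2 are hypothetical), the materialized result of a UNION shows up in EXPLAIN like this:

EXPLAIN SELECT a FROM t1 UNION SELECT a FROM t2;
-- the UNION RESULT <union1,2> row is the materialized temporary table;
-- for the GROUP BY / ORDER BY cases above, look for "Using temporary" in the Extra column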

  

  2. When does a temporary table become an on-disk temporary table (written to temporary files)

If the amount of data the temporary table needs to store exceeds the limit (the smaller of tmp_table_size and max_heap_table_size), an on-disk temporary table is created instead.
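To check or raise the in-memory limit, something like the following can be used (a sketch; the 64M value is only an example):

SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';
SET SESSION tmp_table_size = 64*1024*1024;       -- raise both: the smaller of the two
SET SESSION max_heap_table_size = 64*1024*1024;  -- is the effective in-memory limit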

In MariaDB, on-disk temporary tables use the Aria storage engine by default.

So files like the following appear under the tmpdir directory:

-rw-rw----. 1 mariadb mariadb 1.2G Aug 14 10:06 #sql_750a_0.MAD

-rw-rw----. 1 mariadb mariadb 8.0K Aug 12 17:52 #sql_750a_0.MAI

Here .MAD is the Aria table's data file and .MAI is the Aria table's index file.

 In the following cases, an on-disk temporary table is created directly:

 1. The table contains BLOB/TEXT columns;

2. A GROUP BY or DISTINCT column is a string column longer than 512 characters (or a binary column longer than 512 bytes);

3. A SELECT, UNION, or UNION ALL query contains a column with a maximum length greater than 512 (512 characters for string types, 512 bytes for binary types);

4. SHOW COLUMNS/FIELDS and DESCRIBE statements, because their result sets contain BLOB columns.

  

  3. What other operations use temporary files

 Only some known operations are listed here:

  1. ORDER BY on a non-indexed column
  2. Adding an index
  3. Creating a partitioned table
  4. SHOW CREATE TABLE on a partitioned table

 

  4. How to check whether temporary tables, on-disk temporary tables, or temporary files were used

MariaDB [lzk]> show status like 'Created_tmp%';

+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 3     |    -- number of on-disk temporary tables created
| Created_tmp_files       | 38    |    -- number of temporary files created
| Created_tmp_tables      | 9     |    -- number of temporary tables created
+-------------------------+-------+

3 rows in set (0.00 sec)

Major New Features in MariaDB 10.1

1. Galera is included by default in 10.1
2. Tables, tablespaces, and logs can be encrypted
Requires the file_key_management plugin to be installed
A slave without the encryption plugin installed can still correctly replicate from an encrypted master
The mysqlbinlog tool cannot read an encrypted binlog
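A minimal sketch of per-table encryption once the plugin is loaded (the table name t_enc and key id 1 are illustrative, not from the original text):

CREATE TABLE t_enc (
  id INT PRIMARY KEY,
  secret VARCHAR(200)
) ENGINE=InnoDB ENCRYPTED=YES ENCRYPTION_KEY_ID=1;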

 

3. InnoDB/XtraDB page compression
Unlike row_format=compressed, where compressed and uncompressed pages coexist in the buffer pool, the new page compression compresses pages only just before they are written to the filesystem.
Requires innodb-file-format=Barracuda and innodb-file-per-table=1
Besides zlib, the lz4, lzo, lzma, bzip2, and snappy algorithms are also supported, but they are not in the default build; MariaDB must be recompiled after installing those libraries for them to take effect.
After installing lzo the build detected the algorithm, but compilation failed.
In actual testing, however, the data file of a page_compressed=1 table was not compressed, while row_format=compressed compressed normally.
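For reference, a sketch of the two table definitions being compared (table names are illustrative; assumes Barracuda and file-per-table as configured above):

CREATE TABLE t_pagecomp (id INT PRIMARY KEY, val VARCHAR(100))
  ENGINE=InnoDB PAGE_COMPRESSED=1;        -- new page compression
CREATE TABLE t_rowcomp (id INT PRIMARY KEY, val VARCHAR(100))
  ENGINE=InnoDB ROW_FORMAT=COMPRESSED;    -- classic compressed row format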
4. Replication:
Replication filters based on domain_id
They can only be used when MASTER_USE_GTID is not set to no, and DO_DOMAIN_IDS and IGNORE_DOMAIN_IDS cannot both be set; choose one. Example:
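A minimal sketch on the slave (the domain id 1 is illustrative):

STOP SLAVE;
CHANGE MASTER TO MASTER_USE_GTID=slave_pos, IGNORE_DOMAIN_IDS=(1);
START SLAVE;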

 

Optimistic parallel replication
Enhanced semi-synchronous replication: the transaction is committed after the slave's acknowledgement is received
New rpl_semi_sync_master_wait_point option, settable to AFTER_SYNC or AFTER_COMMIT.
Under row-based replication, triggers on the slave can now fire
The new slave_run_triggers_for_rbr option lets a row-based slave fire triggers that exist only on the slave.
    Enhanced dump thread: multiple slaves can read the binlog concurrently and faster
In certain parallel scenarios a transaction's commit completes immediately, avoiding the throughput drop when many transactions conflict on locks
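A sketch of enabling the two replication options above (assumes the semisync master plugin is already installed; the values are illustrative):

SET GLOBAL rpl_semi_sync_master_wait_point = 'AFTER_SYNC';  -- commit after the slave's acknowledgement
SET GLOBAL slave_run_triggers_for_rbr = 'YES';              -- fire slave-side triggers under ROW format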

 

RESET MASTER gains a TO keyword
When TO is specified, the given number is used for the first binlog file
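For example (the number is illustrative):

RESET MASTER TO 100;   -- the next binary log should then be numbered 000100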

 

    Many replication bug fixes
5. Default roles added
A default role can be assigned to a user: SET DEFAULT ROLE { role | NONE } [ FOR user@host ]
A default_role column has been added to the mysql.user table
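A minimal sketch (the role app_ro, user 'app'@'%', and database lzk are illustrative):

CREATE ROLE app_ro;
GRANT SELECT ON lzk.* TO app_ro;
GRANT app_ro TO 'app'@'%';
SET DEFAULT ROLE app_ro FOR 'app'@'%';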
6. Optimizer
ORDER BY optimized in some scenarios
Always uses "range" (and not a full "index" scan) when it switches to an index to satisfy ORDER BY ... LIMIT
In actual testing the execution plans were identical to MariaDB 10.0.12; it is unclear in which scenarios the plan switches to range.
    Temporary tables no longer create .frm files
MAX_STATEMENT_TIME can be used to abort long queries that exceed the given duration
The maximum execution time can be set globally, per session, or per user. It can abort any statement (except stored procedures). MySQL 5.7 has a similar feature, but only for SELECT statements.
It can also be set for an individual statement:
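A sketch of the global, session, and per-statement forms (the 5-second limit and table t1 are illustrative):

SET GLOBAL max_statement_time = 5;                          -- seconds; 0 disables the limit
SET SESSION max_statement_time = 5;
SET STATEMENT max_statement_time = 5 FOR SELECT * FROM t1;  -- applies only to this statement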

 

UNION ALL no longer uses a temporary table
10.1.12:

10.0.12:
    60% higher throughput on POWER8
    Fewer malloc() calls, making simple queries faster
Performance Schema tables are auto-discovered and no longer use .frm files
[rdb7@redhat64-26 ~]$ ls /home/rdb7/data/data/performance_schema/
db.opt
And the system initialization scripts no longer contain SQL statements that create the performance_schema tables.
    Improved XID cache scalability (by using a lock-free hash)
7. Some GIS features
8. Syntax:
    Full support for IF EXISTS, IF NOT EXISTS, and OR REPLACE
Compound statements can be used outside of stored procedures.
Only BEGIN, IF, CASE, LOOP, WHILE, and REPEAT are supported, and BEGIN must be written as BEGIN NOT ATOMIC to distinguish it from the BEGIN that starts a transaction.
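A minimal sketch that can be run from the mysql client (table t1 is illustrative):

DELIMITER //
BEGIN NOT ATOMIC
  DECLARE i INT DEFAULT 0;
  WHILE i < 3 DO
    INSERT INTO t1 VALUES (i);
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;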
The slow query log records the number of rows affected by UPDATE or DELETE statements
A Rows_affected field has been added to the slow query log:
[rdb7@redhat64-26 data]$ mysqldumpslow redhat64-26-slow.log
Reading mysql slow query log from redhat64-26-slow.log
Count: 2  Time=0.09s (0s)  Lock=0.00s (0s)  Rows_sent=0.0 (0), Rows_examined=0.0 (0), Rows_affected=1.0 (2), root[root]@localhost
  insert into a select N,N
Count: 1  Time=0.09s (0s)  Lock=0.00s (0s)  Rows_sent=0.0 (0), Rows_examined=2.0 (2), Rows_affected=2.0 (2), root[root]@localhost
  delete from a
9. InnoDB/XtraDB:
    Page sizes up to 64K are supported
    The Facebook/Kakao defragmentation patch has been merged, so OPTIMIZE TABLE can defragment InnoDB tablespaces
    InnoDB tables can be required to have a primary key (a sketch of both settings follows this list)
10. Some parameters and variables changed
11. Several new password/authentication plugins added
12. Some security issues fixed
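As referenced under item 9, a sketch of the defragmentation and mandatory-primary-key settings (assumes the variable names innodb_defragment and innodb_force_primary_key; table t1 is illustrative):

SET GLOBAL innodb_defragment = 1;         -- assumed switch that makes OPTIMIZE TABLE defragment in place
OPTIMIZE TABLE t1;
SET GLOBAL innodb_force_primary_key = 1;  -- assumed switch that rejects CREATE TABLE without a primary key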

Setting Up a High-Availability Master/Slave Replication Environment: MySQL + Corosync + Pacemaker + DRBD

  • Overview

Corosync is part of the cluster management suite; it lets you define the message transport method, protocol, and related settings through a simple configuration file.

Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities provided by your preferred cluster infrastructure (OpenAIS or Heartbeat) to detect and recover from node- or resource-level failures, maximizing the availability of cluster services (also called resources).

Distributed Replicated Block Device (DRBD) is a software-based, shared-nothing storage replication solution that mirrors block device contents between servers. Data mirroring is real-time and transparent, and can be synchronous (return only after all servers succeed) or asynchronous (return after the local server succeeds). DRBD's core functionality is implemented in the Linux kernel, as close as possible to the system's I/O stack, but it cannot magically add higher-level features such as detecting corruption of an EXT3 filesystem. DRBD sits below the filesystem, closer to the operating system kernel and I/O stack than the filesystem itself.

  • Environment Preparation

Two 64-bit Red Hat 6.5 servers: Node1: 10.47.169.235, Node2: 10.47.169.177

  1. Configure hostname resolution between the nodes

Node1:

[root@linux235 ~]# hostname

linux235

[root@linux235 ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.47.169.177   linux177

10.47.169.235   linux235

Node2:

[root@linux177 ~]# hostname

Linux177

[root@linux177 ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.47.169.177   linux177

10.47.169.235   linux235

  • Set up passwordless SSH trust between the nodes

Node1:

[root@linux235 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

[root@linux235 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@linux177

Node2:

[root@linux177 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

[root@linux177 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@linux235

  • Disable the firewall and SELinux

Node1:

[root@linux235 ~]# service iptables stop

[root@linux235 ~]# chkconfig iptables off

[root@linux235 ~]# vi /etc/selinux/config

Change the setting to SELINUX=disabled

Node2:

[root@linux177 ~]# service iptables stop

[root@linux177 ~]# chkconfig iptables off

[root@linux177 ~]# vi /etc/selinux/config

Change the setting to SELINUX=disabled

  • Upload the installation files and mount the installation ISO

rhel-server-6.5-x86_64-dvd.iso

drbd84-utils-8.4.4-2.el6.elrepo.x86_64.rpm

kmod-drbd84-8.4.4-1.el6.elrepo.x86_64.rpm

python-pssh-2.3.1-4.3.x86_64.rpm

pssh-2.3.1-4.3.x86_64.rpm

crmsh-2.1-1.2.x86_64.rpm

Upload the files to the /root directory on both servers

Node1:

Clear the yum cache:

[root@linux235 ~]# rm -rf /var/cache/yum/

Create a mount point for the ISO:

[root@linux235 ~]# mkdir /mnt/cdrom

Mount the ISO file:

[root@linux235 ~]# mount -o loop /root/rhel-server-6.5-x86_64-dvd.iso /mnt/cdrom

Edit the yum repository configuration:

[root@linux235 ~]# vi /etc/yum.repos.d/rhel-source.repo

[Server]

name=Server

baseurl=file:///mnt/cdrom/Server

enabled=1

gpgcheck=0

[HighAvailability]

name=HighAvailability

baseurl=file:///mnt/cdrom/HighAvailability

enabled=1

gpgcheck=0

[LoadBalancer]

name=LoadBalancer

baseurl=file:///mnt/cdrom/LoadBalancer

enabled=1

gpgcheck=0

[ScalableFileSystem]

name=ScalableFileSystem

baseurl=file:///mnt/cdrom/ScalableFileSystem

enabled=1

gpgcheck=0

[ResilientStorage]

name=ResilientStorage

baseurl=file:///mnt/cdrom/ResilientStorage

enabled=1

gpgcheck=0

Node2:

Clear the yum cache:

[root@linux177 ~]# rm -rf /var/cache/yum/

Create a mount point for the ISO:

[root@linux177 ~]# mkdir /mnt/cdrom

Mount the ISO file:

[root@linux177 ~]# mount -o loop /root/rhel-server-6.5-x86_64-dvd.iso /mnt/cdrom

Edit the yum repository configuration:

[root@linux177 ~]# vi /etc/yum.repos.d/rhel-source.repo

Same contents as on Node1

  • Create the MySQL installation user

Node1:

[root@linux235 ~]# groupadd rdb -g 501

[root@linux235 ~]# useradd rdb -g rdb -d /home/rdb -u 501

Node2:

[root@linux177 ~]# groupadd rdb -g 501

[root@linux177 ~]# useradd rdb -g rdb -d /home/rdb -u 501

Note: the group ID and user ID here should be greater than 500 (use cat /etc/passwd to see the IDs already in use), and they must be identical on both servers; otherwise permission errors will occur later.

  • Corosync Installation and Configuration
  • Install corosync

Node1:

[root@linux235 ~]# yum install -y corosync

Node2:

[root@linux177 ~]# yum install -y corosync

  • Configure corosync

Node1:

[root@linux235 ~]# cd /etc/corosync/

[root@linux235 corosync]# cp corosync.conf.example corosync.conf

[root@linux235 corosync]# vi corosync.conf

# Please read the corosync.conf.5 manual page

compatibility: whitetank

totem {

        version: 2

        secauth: off

        threads: 0

        interface {

                ringnumber: 0

                bindnetaddr: 10.47.169.0  # heartbeat network segment

                mcastaddr: 226.94.1.1  # multicast address for heartbeat messages

                mcastport: 5405  # multicast port

                ttl: 1

        }

}

logging {

        fileline: off

        to_stderr: no

        to_logfile: yes

        to_syslog: no  # do not log to syslog

        logfile: /var/log/cluster/corosync.log  # log file location

        debug: off

        timestamp: on

        logger_subsys {

                subsys: AMF

                debug: off

        }

}

amf {

        mode: disabled

}

# enable pacemaker

service {  

    ver: 0    

    name: pacemaker    

}

aisexec {  

    user: root    

    group: root    

}

  • Generate the authentication key

[root@linux235 corosync]# corosync-keygen

This generates an authkey file

  • Copy the configuration file and key file to Node2

[root@linux235 corosync]# scp authkey corosync.conf linux177:/etc/corosync/

  • Pacemaker Installation and Configuration
  • Install pacemaker

Node1:

[root@linux235 ~]# yum install -y pacemaker

Node2:

[root@linux177 ~]# yum install -y pacemaker

  • Install crmsh (the Pacemaker management tool)

Node1:

[root@linux235 ~]# rpm -ivh /mnt/cdrom/Packages/redhat-rpm-config-9.0.3-42.el6.noarch.rpm

[root@linux235 ~]# rpm -ivh python-pssh-2.3.1-4.3.x86_64.rpm

[root@linux235 ~]# rpm -ivh pssh-2.3.1-4.3.x86_64.rpm

[root@linux235 ~]# rpm -ivh crmsh-2.1-1.2.x86_64.rpm

Node2:

[root@linux177 ~]# rpm -ivh /mnt/cdrom/Packages/redhat-rpm-config-9.0.3-42.el6.noarch.rpm

[root@linux177 ~]# rpm -ivh python-pssh-2.3.1-4.3.x86_64.rpm

[root@linux177 ~]# rpm -ivh pssh-2.3.1-4.3.x86_64.rpm

[root@linux177 ~]# rpm -ivh crmsh-2.1-1.2.x86_64.rpm

  • Start corosync + pacemaker

Because pacemaker was integrated into corosync during the corosync configuration, starting corosync also starts pacemaker.

Node1:

[root@linux235 ~]# service corosync start

Node2:

[root@linux177 ~]# service corosync start

  • Check the startup information
  • Check whether the corosync engine started correctly

[root@linux235 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log

Nov 24 09:11:37 corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.

Nov 24 09:11:37 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

  • Check whether the initial membership notifications were sent correctly

[root@linux235 ~]# grep  TOTEM /var/log/cluster/corosync.log

Nov 24 09:11:37 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).

Nov 24 09:11:37 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

Nov 24 09:11:38 corosync [TOTEM ] The network interface [10.47.169.235] is now up.

Nov 24 09:11:38 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.

  • Check whether pacemaker started correctly

[root@linux235 ~]# grep pcmk_startup /var/log/cluster/corosync.log

Nov 24 09:11:38 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized

Nov 24 09:11:38 corosync [pcmk  ] Logging: Initialized pcmk_startup

Nov 24 09:11:38 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615

Nov 24 09:11:38 corosync [pcmk  ] info: pcmk_startup: Service: 9

Nov 24 09:11:38 corosync [pcmk  ] info: pcmk_startup: Local hostname: linux235

  • Check the cluster status

[root@linux235 ~]#  crm status

Last updated: Wed Nov 25 16:10:47 2015

Last change: Tue Nov 24 10:54:40 2015 via cibadmin on linux177

Stack: classic openais (with plugin)

Current DC: linux235 - partition with quorum

Version: 1.1.10-14.el6-368c726

2 Nodes configured, 2 expected votes

0 Resources configured

Online: [ linux177 linux235 ]

Both nodes, linux177 and linux235, are online, and the DC is linux235.

  • DRBD Installation and Configuration
  • Install drbd

Node1:

[root@linux235 ~]# rpm -ivh drbd84-utils-8.4.4-2.el6.elrepo.x86_64.rpm kmod-drbd84-8.4.4-1.el6.elrepo.x86_64.rpm

Node2:

[root@linux177 ~]# rpm -ivh drbd84-utils-8.4.4-2.el6.elrepo.x86_64.rpm kmod-drbd84-8.4.4-1.el6.elrepo.x86_64.rpm

  • Configure drbd

Node1:

[root@linux235 ~]# cd /etc/drbd.d

[root@linux235 drbd.d]# vi global_common.conf

global {

        usage-count no; # do not participate in DRBD usage statistics

}

common {

        protocol C;  # fully synchronous replication protocol

        handlers {

                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

        }

        startup {

#               wfc-timeout 10;

        }

        options {

        }

        disk {

                on-io-error detach;  # on I/O error, detach the disk

        }

        net {

                 cram-hmac-alg "sha1";   # peer authentication algorithm

                 shared-secret "mydrbd";   # shared secret used for authentication

                 after-sb-0pri discard-zero-changes;  # split-brain recovery policy

        }

}

  • Define the new resource

Node1:

[root@linux235 drbd.d]# vi mysql.res

resource mysql{

    on linux235 {

        device       /dev/drbd0;

        disk         /dev/sdb2;

        address      10.47.169.235:6669;  

        meta-disk    internal;

    }

    on linux177 {

        device       /dev/drbd0;

        disk         /dev/sdb2;

        address      10.47.169.177:6669;

        meta-disk    internal;

    }

}

  • Copy the configuration files to Node2

Node1:

[root@linux235 drbd.d]# scp global_common.conf mysql.res linux177:/etc/drbd.d/

  • Initialize the resource

Node1:

[root@linux235 ~]# dd if=/dev/zero bs=1M count=1 of=/dev/sdb2

[root@linux235 ~]# drbdadm create-md mysql

Node2:

[root@linux177 ~]# dd if=/dev/zero bs=1M count=1 of=/dev/sdb2

[root@linux177 ~]# drbdadm create-md mysql

  • Start DRBD

Node1:

[root@linux235 ~]# service drbd start

Node2:

[root@linux177 ~]# service drbd start

  • Make Node2 the primary node

Node2:

[root@linux177 ~]# drbdadm -- --overwrite-data-of-peer primary mysql

Data synchronization then begins and takes a while to complete fully; check the completion status:

Node2:

[root@linux177 ~]# drbd-overview

  0:mysql/0  Connected Primary/Secondary UpToDate/UpToDate C r-----

[root@linux177 ~]# cat /proc/drbd

version: 8.4.4 (api:1/proto:86-101)

GIT-hash: 599f286440bd633d15d5ff985204aff4bccffadd build by phil@Build64R6, 2013-10-14 15:33:06

 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

    ns:388 nr:632 dw:1024 dr:6853 al:8 bm:12 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

cs: connection state, ro: roles, ds: disk states

  • Format and mount

Node2:

[root@linux177 ~]# mkfs.ext4 /dev/drbd0

[root@linux177 ~]# mkdir /data

[root@linux177 ~]# mount -t ext4 /dev/drbd0 /data

[root@linux177 ~]# chown rdb:rdb -R /data

  • Make Node1 the primary node

Node2:

[root@linux177 ~]# umount /data/

[root@linux177 ~]# drbdadm secondary mysql

Node1:

[root@linux235 ~]# drbdadm primary mysql

[root@linux235 ~]# drbd-overview

0:mysql/0  Connected Primary/Secondary UpToDate/UpToDate C r-----

You can see it has switched to primary

[root@linux235 ~]# mkdir /data

[root@linux235 ~]# mount -t ext4 /dev/drbd0 /data

  • MySQL Installation and Configuration

MariaDB 10.0.12 is used here as the example; for other MySQL versions, refer to their respective installation documentation.

  1. Upload the MySQL installation package

Node1:

[root@linux235 ~]# su - rdb

Upload the package to the rdb user's home directory

  • Extract the package and edit the configuration file

Node1:

[rdb@linux235 ~]$ tar -xzf mariadb64_linux.tar.gz

[rdb@linux235 ~]$ cp -r bin etc lib log share ~/

[rdb@linux235 ~]$ cd ~

[rdb@linux235 ~]$ vi etc/my.cnf

# Don't move configuration items to other sections

[general]

instance_num=1

#

# The MariaDB server

#

[mysqld]

# generic configuration options

table_open_cache = 2048

max_allowed_packet = 16M

sort_buffer_size = 512K

read_buffer_size = 256K

read_rnd_buffer_size = 512K

max_connect_errors = 100000

skip-external-locking

sql_mode = STRICT_TRANS_TABLES

sync_binlog=1

expire_logs_days=7

#*** MyISAM Specific options

key_buffer_size = 16M

myisam_sort_buffer_size = 8M

# *** INNODB Specific options ***

innodb_data_file_path = ibdata1:500M:autoextend

innodb-file-per-table

innodb_buffer_pool_size = 600M

innodb_flush_log_at_trx_commit = 1

innodb_io_capacity = 200

innodb_io_capacity_max = 2000

innodb_lock_wait_timeout = 50

innodb_log_buffer_size = 8M

innodb_log_files_in_group = 2

innodb_log_file_size = 100M

innodb_read_io_threads = 8

innodb_write_io_threads = 8

#the default size of binlog file is set to 10M

max_binlog_size = 10485760

# binary logging format

binlog_format=ROW

# the slave data obtained by replication also credited in binlog when log-slave-updates=1

log-slave-updates=1

# required unique id between 1 and 2^32 - 1

# defaults to 1 if master-host is not set

# but will not function as a master if omitted

binlog_annotate_row_events=ON

replicate_annotate_row_events=ON

replicate_events_marked_for_skip=FILTER_ON_MASTER

slave-skip-errors=1007,1008,1050,1060,1061,1062,1068

[mysqld1]

port            = 5518

socket          = /home/rdb/bin/mysql1.sock

bind_address = 0.0.0.0

datadir  = /data/data

log-error=/home/rdb/log/mysqld1.log

pid-file=/home/rdb/bin/mysqld1.pid

innodb_data_home_dir = /data/data

innodb_log_group_home_dir = /data/redo

server-id       = 1

log-bin=/data/binlog/mysql-bin

relay-log=/data/relaylog/relay-bin

#

# The following options will be read by MariaDB client applications.

# Note that only client applications shipped by MariaDB are guaranteed

# to read this section. If you want your own MariaDB client program to

# honor these values, you need to specify it as an option during the

# MariaDB client library initialization.

#

[client]

port            = 5518

socket          = /home/rdb/bin/mysql1.sock

[mysql]

no-auto-rehash

# Only allow UPDATEs and DELETEs that use keys.

#safe-updates

Pay particular attention to the configuration items that were modified for this environment.

  • Initialize MySQL

Node1:

Create the directories required by the configuration in the previous step:

[rdb@linux235 ~]$ mkdir -p /data/data /data/redo /data/binlog /data/relaylog

Initialize:

[rdb@linux235 ~]$ sh bin/mysql_install_db

  • Copy MySQL to Node2

Node1:

[rdb@linux235 ~]$ scp -r bin etc lib log share rdb@linux177:/home/rdb

  • Upload the startup script to /etc/init.d on Node1 and Node2

[rdb@linux235 ~]$ cat /etc/init.d/mysqld

#!/bin/sh

#

# set +x

start()

{

    su - $RDB_USER -c "$RDB_HOME/bin/mysql.server start $1"

    if [ "$?" -ne 0 ]; then

                exit 1

        fi

}

stop()

{

    su - $RDB_USER -c "$RDB_HOME/bin/mysql.server stop $1"

    if [ "$?" -ne 0 ]; then

                exit 1

        fi

}

status()

{

    su - $RDB_USER -c "$RDB_HOME/bin/mysql.server status $1"

    if [ "$?" -ne 0 ]; then

                exit 3

        fi

}

usage()

{

    echo "usage:"

    echo "$0 [start|stop|status] [instance_id|all]"

}

################################################################################

#  main

################################################################################

RDB_USER="rdb"

RDB_HOME="/home/rdb"

case "$1" in

    start)

        start "$2"

        ;;

    stop)

        stop "$2"

        ;;

    status)

        status "$2"

        ;;

    *)

        usage

esac

  • Cluster Configuration
  • Stop drbd and disable it at boot

Node1:

[root@linux235 ~]# service drbd stop

[root@linux235 ~]# chkconfig drbd off

[root@linux235 ~]# chkconfig corosync on

Node2:

[root@linux177 ~]# service drbd stop

[root@linux177 ~]# chkconfig drbd off

[root@linux177 ~]# chkconfig corosync on

  • Add the DRBD resource

Node1:

[root@linux235 ~]# crm

crm(live)# configure

crm(live)configure# property stonith-enabled=false

crm(live)configure# property no-quorum-policy=ignore

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mysql op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30

crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

crm(live)configure# commit

crm(live)configure# show

node linux177

node linux235

primitive mysqldrbd ocf:linbit:drbd \

        params drbd_resource=mysql \

        op start timeout=240 interval=0 \

        op stop timeout=100 interval=0 \

        op monitor role=Master interval=20 timeout=30 \

        op monitor role=Slave interval=30 timeout=30

ms ms_mysqldrbd mysqldrbd \

        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

property cib-bootstrap-options: \

        expected-quorum-votes=2 \

        stonith-enabled=false \

        no-quorum-policy=ignore

Node2:

Check the cluster status

[root@linux177 ~]# crm status

Last updated: Fri Nov 27 16:04:50 2015

Last change: Fri Nov 27 15:59:17 2015

Current DC: linux177 - partition with quorum

2 Nodes configured

2 Resources configured

Online: [ linux177 linux235 ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]

     Masters: [ linux235 ]

     Slaves: [ linux177 ]

The master/slave roles are now in effect

  • Add the filesystem resource

Node1:

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/data fstype=ext4 op start timeout=60 op stop timeout=60 op monitor interval=20 timeout=40

crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master

crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# show

node linux177

node linux235

primitive mysqldrbd ocf:linbit:drbd \

        params drbd_resource=mysql \

        op start timeout=240 interval=0 \

        op stop timeout=100 interval=0 \

        op monitor role=Master interval=20 timeout=30 \

        op monitor role=Slave interval=30 timeout=30

primitive mystore Filesystem \

        params device="/dev/drbd0" directory="/data" fstype=ext4 \

        op start timeout=60 interval=0 \

        op stop timeout=60 interval=0 \

        op monitor interval=20s timeout=40s

ms ms_mysqldrbd mysqldrbd \

        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master

order mystore_after_ms_mysqldrbd Mandatory: ms_mysqldrbd:promote mystore:start

property cib-bootstrap-options: \

        expected-quorum-votes=2 \

        stonith-enabled=false \

        no-quorum-policy=ignore \

        dc-version=1.1.11-97629de \

        cluster-infrastructure=”classic openais (with plugin)”

crm(live)configure# exit

Run the mount command to confirm that /data has been mounted

[root@linux235 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")

/dev/sda1 on /boot type ext4 (rw)

/dev/sda5 on /home type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

10.47.169.195:/root/mounttest1 on /root/nfs_root type nfs (rw,vers=4,addr=10.47.169.195,clientaddr=10.47.169.235)

/root/rhel-server-6.6-x86_64-dvd.iso on /mnt/cdrom type iso9660 (rw,loop=/dev/loop0)

/dev/drbd0 on /data type ext4 (rw)

  • Add the MySQL resource

Node1:

[root@linux235 ~]# crm

crm(live)# configure

crm(live)configure# primitive mysqld lsb:mysqld op monitor interval=20s timeout=15s

crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore

crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld

crm(live)configure# show

node linux177

node linux235

primitive mysqld lsb:mysqld \

        op monitor interval=20s timeout=15s

primitive mysqldrbd ocf:linbit:drbd \

        params drbd_resource=mysql \

        op start timeout=240 interval=0 \

        op stop timeout=100 interval=0 \

        op monitor role=Master interval=20 timeout=30 \

        op monitor role=Slave interval=30 timeout=30

primitive mystore Filesystem \

        params device="/dev/drbd0" directory="/data" fstype=ext4 \

        op start timeout=60 interval=0 \

        op stop timeout=60 interval=0 \

        op monitor interval=20s timeout=40s

ms ms_mysqldrbd mysqldrbd \

        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

colocation mysqld_with_mystore inf: mysqld mystore

colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master

order mysqld_after_mystore Mandatory: mystore mysqld

order mystore_after_ms_mysqldrbd Mandatory: ms_mysqldrbd:promote mystore:start

property cib-bootstrap-options: \

        expected-quorum-votes=2 \

        stonith-enabled=false \

        no-quorum-policy=ignore \

        dc-version=1.1.11-97629de \

        cluster-infrastructure=”classic openais (with plugin)”

crm(live)configure# commit

crm(live)configure# exit

Check the cluster status

[root@linux235 ~]# crm status

Last updated: Fri Nov 27 17:02:45 2015

Last change: Fri Nov 27 16:44:49 2015

Stack: classic openais (with plugin)

Current DC: linux235 - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

4 Resources configured

Online: [ linux177 linux235 ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]

     Masters: [ linux235 ]

     Slaves: [ linux177 ]

 mystore        (ocf::heartbeat:Filesystem):    Started linux235

 mysqld (lsb:mysqld):   Started linux235

Check the MySQL status; it is already running

[root@linux235 ~]# service mysqld status

running process of [rdb]….

ID:1 rdb 15341 1 0 16:44 ? 00:00:01 /home/rdb/bin/mysqld

status OK!

  • Add the VIP resource

Node1:

[root@linux235 ~]# crm

crm(live)# configure

crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip="10.47.169.14" nic=eth0 cidr_netmask=24 op monitor interval=20s timeout=20s

crm(live)configure# colocation vip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master vip

crm(live)configure# verify    

crm(live)configure# show

node linux177

node linux235

primitive mysqld lsb:mysqld \

        op monitor interval=20s timeout=15s

primitive mysqldrbd ocf:linbit:drbd \

        params drbd_resource=mysql \

        op start timeout=240 interval=0 \

        op stop timeout=100 interval=0 \

        op monitor role=Master interval=20 timeout=30 \

        op monitor role=Slave interval=30 timeout=30

primitive mystore Filesystem \

        params device="/dev/drbd0" directory="/data" fstype=ext4 \

        op start timeout=60 interval=0 \

        op stop timeout=60 interval=0 \

        op monitor interval=20s timeout=40s

primitive vip IPaddr \

        params ip=10.47.169.14 nic=eth0 cidr_netmask=24 \

        op monitor interval=20s timeout=20s

ms ms_mysqldrbd mysqldrbd \

        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

colocation mysqld_with_mystore inf: mysqld mystore

colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master

colocation vip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master vip

order mysqld_after_mystore Mandatory: mystore mysqld

order mystore_after_ms_mysqldrbd Mandatory: ms_mysqldrbd:promote mystore:start

property cib-bootstrap-options: \

        expected-quorum-votes=2 \

        stonith-enabled=false \

        no-quorum-policy=ignore \

        dc-version=1.1.11-97629de \

        cluster-infrastructure=”classic openais (with plugin)”

crm(live)configure# commit

crm(live)configure# exit

Check the status:

[root@linux235 ~]# crm status

Last updated: Mon Nov 30 11:04:05 2015

Last change: Mon Nov 30 11:01:40 2015

Stack: classic openais (with plugin)

Current DC: linux235 - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

5 Resources configured

Online: [ linux177 linux235 ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]

     Masters: [ linux235 ]

     Slaves: [ linux177 ]

 mystore        (ocf::heartbeat:Filesystem):    Started linux235

 mysqld (lsb:mysqld):   Started linux235

 vip    (ocf::heartbeat:IPaddr):        Started linux235

The cluster configuration is now complete

  • Common Errors
  • Manual split-brain recovery

If you see the following message in the system log:

Split-Brain detected but unresolved, dropping connection!

then a split-brain has occurred and cannot be repaired automatically. Depending on your needs, discard the changes made on one node (A) and keep the data modified on the other node (B). The steps are as follows:

  1. On node A, run drbdadm secondary mysql
  2. On node A, run drbdadm -- --discard-my-data connect mysql
  3. On node B, run drbdadm connect mysql

Note: mysql is the DRBD resource name defined earlier

  • Unable to initialize the DRBD resource

If you encounter the following error while running drbdadm create-md mysql:

Device size would be truncated, which

 would corrupt data and result in

 'access beyond end of device' errors.

 You need to either

    * use external meta data (recommended)

    * shrink that filesystem first

    * zero out the device (destroy the filesystem)

 Operation refused.

 Command 'drbdmeta 0 v08 /dev/hdb1 internal create-md' terminated with exit code 40

 drbdadm create-md ha: exited with code 40

You need to use the dd command to overwrite the existing filesystem metadata blocks on the device, as shown below:

dd if=/dev/zero bs=1M count=1 of=/dev/sdb2