Updated: December 6, 2023
multipath
Table of Contents
[toc]
1. What is multipath
multipath is a Linux mechanism that provides path redundancy and load balancing for storage devices. With multipath, the same device can be reached over several paths, and the kernel's built-in path selectors spread read/write traffic across those paths. If some paths fail, I/O is automatically switched to the remaining healthy paths. This improves both the reliability of the system and the efficiency of storage access.
2. The multipath configuration file
multipath is configured through the multipath.conf file, located at /etc/multipath.conf. It contains the basic multipath parameters as well as per-array settings such as device types and path priorities; all of these can be tuned by editing multipath.conf.
demo1 (minimal configuration)
[root@docker ~]#cat /etc/multipath.conf
defaults {
user_friendly_names yes
find_multipaths yes
}
demo2
Below is an example multipath.conf:
defaults {
user_friendly_names yes
find_multipaths yes
path_grouping_policy group_by_prio
path_selector "round-robin 0"
failback immediate
rr_min_io 100
}
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z][[0-9]*"
devnode "^cciss!c[0-9]*d[0-9]*"
devnode "^ssv-(.*)(zer|tic)"
}
blacklist_exceptions {
wwid ".*"
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
prio "alua"
features "1 queue_if_no_path"
hardware_handler "1 alua"
}
}
As the example shows, multipath.conf is made up of the following sections:
- defaults: global default settings, such as user_friendly_names (whether to use friendly map names instead of raw WWIDs) and find_multipaths (whether to create multipath devices only for disks that actually have more than one path).
- blacklist: device types or names that should be excluded from multipath handling. The example above blacklists device types such as ram, raw, loop and fd.
- blacklist_exceptions: devices that must not be blacklisted even if they match the blacklist. In the example above, the wwid ".*" entry takes every device with a WWID back out of the blacklist.
- devices: per-array settings. The example defines one device entry and specifies its vendor, product name and a few other parameters. (A quick way to verify the settings that are actually in effect is shown right after this list.)
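After editing /etc/multipath.conf it is useful to confirm which values are actually in effect (built-in defaults merged with your overrides). Both commands below are also mentioned in the package's own sample configuration file:
# Dump the complete effective configuration, including internal defaults
multipath -t
# The same information, queried from the running multipathd daemon
multipathd show config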
demo3
blacklist {
wwid 3600508b1001c044c39717726236c68d5
}
defaults {
user_friendly_names yes
polling_interval 10
queue_without_daemon no
flush_on_last_del yes
checker_timeout 120
}
devices {
device {
vendor "3par8400"
product "HP"
path_grouping_policy group_by_prio
no_path_retry 30
prio hp_sw
path_checker tur
path_selector "round-robin 0"
hardware_handler "0"
failback 15
}
}
multipaths {
multipath {
wwid 360002ac0000000000000000300023867
alias mpathdisk01
}
}
If there are two or more LUNs, simply add another multipath block (how to make the change take effect is shown after the block):
multipaths {
multipath {
wwid 360002ac0000000000000000400023867
alias mpathdisk02
}
}
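After adding or changing entries like these, multipathd has to re-read the configuration before the new aliases appear. On CentOS 7 either of the following is commonly used (a sketch; adjust to your setup):
# Ask the running daemon to re-read /etc/multipath.conf
systemctl reload multipathd
# Equivalent, using multipathd's command interface
multipathd -k"reconfigure"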
Field reference
The configuration above is a multipath.conf file used to set up multipath devices on Linux. The meaning of each field:
blacklist: devices excluded from multipath handling; a device is blacklisted if its WWID (or devnode) matches any entry in this section.
wwid: the World Wide Identifier, a unique string that identifies a LUN / multipath device.
defaults: default settings, which can be overridden in the other sections.
user_friendly_names: use readable map names such as mpatha instead of raw WWIDs.
polling_interval: how often, in seconds, the path checkers poll path state.
queue_without_daemon: whether I/O keeps being queued when the multipathd daemon is not running.
flush_on_last_del: whether to stop queueing (flush) outstanding I/O when the last path to a device is deleted.
checker_timeout: timeout, in seconds, for path-checker commands.
devices: contains one or more device blocks, each describing the settings for a particular storage array type.
device: describes one array type and its attributes.
vendor, product: the vendor and product strings reported by the array.
path_grouping_policy: how paths are grouped into path groups (failover, multibus, group_by_prio, ...).
no_path_retry: how many polling intervals to keep queueing I/O after all paths have failed, or the keywords queue / fail.
prio: the prioritizer used to rank paths, e.g. alua, emc, hp_sw.
path_checker: the method used to test whether a path is healthy (e.g. tur, readsector0, directio).
path_selector: the algorithm used to pick a path for each I/O; for example "round-robin 0" spreads requests across the paths in turn.
hardware_handler: the kernel hardware handler used for device-specific actions during path events, e.g. "1 alua".
failback: how failback to the preferred path group is handled: immediate, manual, or a delay in seconds.
multipaths: contains one or more multipath blocks, each holding per-LUN settings keyed by WWID.
alias: defines a friendly alias for the given multipath device.
prio is a multipath.conf keyword that selects the prioritizer, i.e. the algorithm used to rank the paths of a multipath device. For example:
prio alua
This selects the Asymmetric Logical Unit Access (ALUA) prioritizer, which is common in SAN environments where the array advertises which ports should be preferred for access.
Besides alua, other prioritizers are available, for example:
emc: for EMC storage arrays.
hp_sw: for HP/Compaq active-standby arrays.
rdac: for LSI (RDAC) storage arrays.
If prio is not set, the const prioritizer is used by default, which gives every path the same priority.
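To see how the configured prioritizer actually ranks each path, one option (assuming multipathd is running; the column layout varies between versions) is:
# Per-path state from the daemon, including the "pri" column
multipathd show paths
# Or grep the prioritizer choice for a single path out of the debug output
multipath -v3 /dev/sdc 2>&1 | grep -i prio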
3. Commands
The multipath tool provides a set of command-line operations for working with device paths. Commonly used commands:
- multipath -ll: show full multipath device information, including path states reported by the path checkers. **(commonly used)**
- multipath -l: show the current multipath topology as known to sysfs and the device mapper, without re-checking path state.
- multipath -F: flush (remove) all unused multipath device maps. **(commonly used)**
- multipath -f <device>: flush (remove) the specified multipath device map (see the example after this list).
- multipath -r: force a reload of the multipath device maps.
Many other options are available; see man multipath for details.
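As a small illustration, flushing and re-creating a single unused map might look like this (the map name mpathd is taken from the output further below; adjust it to your environment):
# Remove one specific multipath map (it must not be in use)
multipath -f mpathd
# Re-scan and re-create the maps
multipath -v2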
Example: show multipath device information (commonly used)
List all multipath devices on the system together with their WWIDs:
multipath -ll
This prints each multipath device's alias, WWID, size and underlying paths.
[root@NSR-db1 ~]# multipath -ll
mpathd (360060e80072be00000302be000000280) dm-10 HITACHI ,OPEN-V
size=30G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:3 sdf 8:80 active ready running
|- 7:0:1:3 sdj 8:144 active ready running
|- 8:0:0:3 sdn 8:208 active ready running
`- 8:0:1:3 sdr 65:16 active ready running
mpathc (360060e80072be00000302be00000027f) dm-9 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:2 sde 8:64 active ready running
|- 7:0:1:2 sdi 8:128 active ready running
|- 8:0:0:2 sdm 8:192 active ready running
`- 8:0:1:2 sdq 65:0 active ready running
mpathb (360060e80072be00000302be00000027e) dm-8 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:1 sdd 8:48 active ready running
|- 7:0:1:1 sdh 8:112 active ready running
|- 8:0:0:1 sdl 8:176 active ready running
`- 8:0:1:1 sdp 8:240 active ready running
mpatha (360060e80072be00000302be00000027a) dm-7 HITACHI ,OPEN-V
size=1000G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:0 sdc 8:32 active ready running
|- 7:0:1:0 sdg 8:96 active ready running
|- 8:0:0:0 sdk 8:160 active ready running
`- 8:0:1:0 sdo 8:224 active ready running
[root@NSR-db1 ~]#
Example: flush multipath maps
multipath -F: flush (remove) all unused multipath device maps. **(commonly used)**
-v lvl verbosity level
. 0 no output
. 1 print created devmap names only
. 2 default verbosity
. 3 print debug information
# Flush unused maps
multipath -F
# Flush, then re-scan and re-create the maps (verbosity 2)
multipath -v2 -F
multipath -v2
# Re-scan devices with debug output (multipath -v3)
multipath -v3 -F
multipath -v3
[root@NSR-db1 ~]# multipath -F
[root@NSR-db1 ~]# multipath -v2 -F
[root@NSR-db1 ~]# multipath -v3 -F
Dec 05 14:37:55 | set open fds limit to 1048576/1048576
Dec 05 14:37:55 | loading /lib64/multipath/libcheckdirectio.so checker
Dec 05 14:37:55 | loading /lib64/multipath/libprioconst.so prioritizer
Dec 05 14:37:55 | unloading const prioritizer
Dec 05 14:37:55 | unloading directio checker
[root@NSR-db1 ~]#
Example: -v2 / -v3
Output at the different verbosity levels:
-v lvl verbosity level
. 0 no output
. 1 print created devmap names only
. 2 default verbosity
. 3 print debug information
[root@NSR-db1 ~]# multipath -v2
create: mpatha (360060e80072be00000302be00000027a) undef HITACHI ,OPEN-V
size=1000G features='0' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
|- 7:0:0:0 sdc 8:32 undef ready running
|- 7:0:1:0 sdg 8:96 undef ready running
|- 8:0:0:0 sdk 8:160 undef ready running
`- 8:0:1:0 sdo 8:224 undef ready running
create: mpathb (360060e80072be00000302be00000027e) undef HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
|- 7:0:0:1 sdd 8:48 undef ready running
|- 7:0:1:1 sdh 8:112 undef ready running
|- 8:0:0:1 sdl 8:176 undef ready running
`- 8:0:1:1 sdp 8:240 undef ready running
create: mpathc (360060e80072be00000302be00000027f) undef HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
|- 7:0:0:2 sde 8:64 undef ready running
|- 7:0:1:2 sdi 8:128 undef ready running
|- 8:0:0:2 sdm 8:192 undef ready running
`- 8:0:1:2 sdq 65:0 undef ready running
create: mpathd (360060e80072be00000302be000000280) undef HITACHI ,OPEN-V
size=30G features='0' hwhandler='0' wp=undef
`-+- policy='service-time 0' prio=1 status=undef
|- 7:0:0:3 sdf 8:80 undef ready running
|- 7:0:1:3 sdj 8:144 undef ready running
|- 8:0:0:3 sdn 8:208 undef ready running
`- 8:0:1:3 sdr 65:16 undef ready running
[root@NSR-db1 ~]#
[root@NSR-db1 ~]# multipath -v3
Dec 05 14:39:09 | set open fds limit to 1048576/1048576
Dec 05 14:39:09 | loading /lib64/multipath/libcheckdirectio.so checker
Dec 05 14:39:09 | loading /lib64/multipath/libprioconst.so prioritizer
Dec 05 14:39:09 | sda: not found in pathvec
Dec 05 14:39:09 | sda: mask = 0x3f
Dec 05 14:39:09 | sda: dev_t = 8:0
Dec 05 14:39:09 | sda: size = 937406464
Dec 05 14:39:09 | sda: vendor = PM8060-
Dec 05 14:39:09 | sda: product = 1
Dec 05 14:39:09 | sda: rev = V1.0
Dec 05 14:39:09 | sda: h:b:t:l = 0:0:0:0
Dec 05 14:39:09 | sda: path state = running
Dec 05 14:39:09 | sda: 58350 cyl, 255 heads, 63 sectors/track, start at 0
Dec 05 14:39:09 | sda: serial = E274D1D3
Dec 05 14:39:09 | sda: get_state
Dec 05 14:39:09 | sda: detect_checker = 1 (config file default)
Dec 05 14:39:09 | sda: path checker = directio (internal default)
Dec 05 14:39:09 | sda: checker timeout = 45000 ms (sysfs setting)
Dec 05 14:39:09 | directio: starting new request
Dec 05 14:39:09 | directio: io finished 4096/0
Dec 05 14:39:09 | sda: directio state = up
Dec 05 14:39:09 | sda: uid_attribute = ID_SERIAL (internal default)
Dec 05 14:39:09 | sda: uid = 2d3d174e200d00000 (udev)
Dec 05 14:39:09 | sda: detect_prio = 1 (config file default)
Dec 05 14:39:09 | sda: prio = const (internal default)
Dec 05 14:39:09 | sda: prio args = (internal default)
Dec 05 14:39:09 | sda: const prio = 1
Dec 05 14:39:09 | sdb: not found in pathvec
Dec 05 14:39:09 | sdb: mask = 0x3f
Dec 05 14:39:09 | sdb: dev_t = 8:16
Dec 05 14:39:09 | sdb: size = 4676648960
Dec 05 14:39:09 | sdb: vendor = PM8060-
Dec 05 14:39:09 | sdb: product = raid10
Dec 05 14:39:09 | sdb: rev = V1.0
Dec 05 14:39:09 | sdb: h:b:t:l = 0:0:1:0
Dec 05 14:39:09 | sdb: path state = running
Example: show the current multipath topology
multipath -l: show the current topology from sysfs and the device mapper, without re-checking path state.
[root@NSR-db1 ~]# multipath -l
mpathd (360060e80072be00000302be000000280) dm-10 HITACHI ,OPEN-V
size=30G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 7:0:0:3 sdf 8:80 active undef running
|- 7:0:1:3 sdj 8:144 active undef running
|- 8:0:0:3 sdn 8:208 active undef running
`- 8:0:1:3 sdr 65:16 active undef running
mpathc (360060e80072be00000302be00000027f) dm-9 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 7:0:0:2 sde 8:64 active undef running
|- 7:0:1:2 sdi 8:128 active undef running
|- 8:0:0:2 sdm 8:192 active undef running
`- 8:0:1:2 sdq 65:0 active undef running
mpathb (360060e80072be00000302be00000027e) dm-8 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 7:0:0:1 sdd 8:48 active undef running
|- 7:0:1:1 sdh 8:112 active undef running
|- 8:0:0:1 sdl 8:176 active undef running
`- 8:0:1:1 sdp 8:240 active undef running
mpatha (360060e80072be00000302be00000027a) dm-7 HITACHI ,OPEN-V
size=1000G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 7:0:0:0 sdc 8:32 active undef running
|- 7:0:1:0 sdg 8:96 active undef running
|- 8:0:0:0 sdk 8:160 active undef running
`- 8:0:1:0 sdo 8:224 active undef running
[root@NSR-db1 ~]#
Example: find a disk's WWID
Use the following command to look up a disk's WWID:
sudo udevadm info --query=all --name=/dev/sdX | grep ID_SERIAL
Replace /dev/sdX with the disk you want to inspect, for example /dev/sda or /dev/sdb. The command prints all udev properties of the device, and grep filters out the ID_SERIAL line, which carries the WWID.
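On CentOS 7 the same identifier can usually also be read with the scsi_id helper, and WWIDs that multipath has already claimed are recorded in /etc/multipath/wwids (paths and options may differ slightly between distributions; treat this as a sketch):
# -g: treat the device as whitelisted, -u: replace whitespace in the result
/usr/lib/udev/scsi_id -g -u /dev/sdc
# WWIDs already claimed by multipath
cat /etc/multipath/wwids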
Example: dry-run status with multipath -d -l
-d dry run, do not create or update devmaps
[root@NSR-db1 ~]# multipath -d -l
mpathd (360060e80072be00000302be000000280) dm-10 HITACHI ,OPEN-V
size=30G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 7:0:0:3 sdf 8:80 active undef running
|- 7:0:1:3 sdj 8:144 active undef running
|- 8:0:0:3 sdn 8:208 active undef running
`- 8:0:1:3 sdr 65:16 active undef running
mpathc (360060e80072be00000302be00000027f) dm-9 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 7:0:0:2 sde 8:64 active undef running
|- 7:0:1:2 sdi 8:128 active undef running
|- 8:0:0:2 sdm 8:192 active undef running
`- 8:0:1:2 sdq 65:0 active undef running
mpathb (360060e80072be00000302be00000027e) dm-8 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 7:0:0:1 sdd 8:48 active undef running
|- 7:0:1:1 sdh 8:112 active undef running
|- 8:0:0:1 sdl 8:176 active undef running
`- 8:0:1:1 sdp 8:240 active undef running
mpatha (360060e80072be00000302be00000027a) dm-7 HITACHI ,OPEN-V
size=1000G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 7:0:0:0 sdc 8:32 active undef running
|- 7:0:1:0 sdg 8:96 active undef running
|- 8:0:0:0 sdk 8:160 active undef running
`- 8:0:1:0 sdo 8:224 active undef running
[root@NSR-db1 ~]#
4. ==Hands-on practice==
The steps below were tested on a real production task.
Hands-on: multipath configuration, 2023.12.5 (tested successfully)
Environment:
CentOS 7.9
By default the multipath service is not installed on Linux, so we have to install it manually.
1. Installation
- Install the multipath service
yum install device-mapper-multipath
⚠ Note:
In my own testing there was no need to add the multipath modules to the kernel manually; they are loaded automatically (recorded here for reference; a manual-load sketch follows the lsmod output below).
[root@docker ~]#lsmod |grep multipath
dm_multipath 27792 0
dm_mod 128595 10 dm_multipath,dm_log,dm_mirror
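If the modules ever do need to be loaded by hand (normally unnecessary, as noted above), a minimal sketch would be:
# Load the device-mapper multipath target and the round-robin path selector
modprobe dm-multipath
modprobe dm-round-robin
# Confirm the modules are present
lsmod | grep multipath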
2. Configuration
- Behaviour right after installing the multipath package
[root@docker ~]#multipath -ll
Dec 05 08:41:19 | DM multipath kernel driver not loaded
Dec 05 08:41:19 | /etc/multipath.conf does not exist, blacklisting all devices.
Dec 05 08:41:19 | A default multipath.conf file is located at
Dec 05 08:41:19 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Dec 05 08:41:19 | You can run /sbin/mpathconf --enable to create
Dec 05 08:41:19 | /etc/multipath.conf. See man mpathconf(8) for more details
Dec 05 08:41:19 | DM multipath kernel driver not loaded
[root@docker ~]#
- Next, look at the sample file mentioned in that output, /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf:
[root@docker ~]#cat /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
#
# For a complete list of the default configuration values, run either
# multipath -t
# or
# multipathd show config
#
# For a list of configuration options with descriptions, see the multipath.conf
# man page
### By default, devices with vendor = "IBM" and product = "S/390.*" are
### blacklisted. To enable mulitpathing on these devies, uncomment the
### following lines.
#blacklist_exceptions {
# device {
# vendor "IBM"
# product "S/390.*"
# }
#}
### Use user friendly names, instead of using WWIDs as names.
defaults {
user_friendly_names yes
find_multipaths yes
}
###
### Here is an example of how to configure some standard options.
###
#
#defaults {
# polling_interval 10
# path_selector "round-robin 0"
# path_grouping_policy multibus
# uid_attribute ID_SERIAL
# prio alua
# path_checker readsector0
# rr_min_io 100
# max_fds 8192
# rr_weight priorities
# failback immediate
# no_path_retry fail
# user_friendly_names yes
#}
###
### The wwid line in the following blacklist section is shown as an example
### of how to blacklist devices by wwid. The 2 devnode lines are the
### compiled in default blacklist. If you want to blacklist entire types
### of devices, such as all scsi devices, you should use a devnode line.
### However, if you want to blacklist specific devices, you should use
### a wwid line. Since there is no guarantee that a specific device will
### not change names on reboot (from /dev/sda to /dev/sdb for example)
### devnode lines are not recommended for blacklisting specific devices.
###
#blacklist {
# wwid 26353900f02796769
# devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
# devnode "^hd[a-z]"
#}
#multipaths {
# multipath {
# wwid 3600508b4000156d700012000000b0000
# alias yellow
# path_grouping_policy multibus
# path_selector "round-robin 0"
# failback manual
# rr_weight priorities
# no_path_retry 5
# }
# multipath {
# wwid 1DEC_____321816758474
# alias red
# }
#}
#devices {
# device {
# vendor "COMPAQ "
# product "HSV110 (C)COMPAQ"
# path_grouping_policy multibus
# path_checker readsector0
# path_selector "round-robin 0"
# hardware_handler "0"
# failback 15
# rr_weight priorities
# no_path_retry queue
# }
# device {
# vendor "COMPAQ "
# product "MSA1000 "
# path_grouping_policy multibus
# }
#}
(A default multipath.conf template already ships with the package; run the following command to enable multipath, which creates /etc/multipath.conf.)
[root@docker ~]#/sbin/mpathconf --enable
Now check the generated configuration file again (==this is the minimal configuration==):
[root@docker ~]#cat /etc/multipath.conf
……
defaults {
user_friendly_names yes
find_multipaths yes
}
……
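mpathconf can also set the most common options and start the daemon in one step. The flags below exist on RHEL/CentOS 7, but treat the exact combination as a sketch to adapt:
# Enable multipath, keep friendly names, and start multipathd right away
/sbin/mpathconf --enable --user_friendly_names y --with_multipathd y
# Running mpathconf without arguments prints the settings it currently manages
/sbin/mpathconf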
3. Start and check the multipathd service
[root@docker ~]#systemctl enable multipathd --now
[root@docker ~]#systemctl status multipathd
● multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
Active: active (running) since 二 2023-12-05 08:45:24 CST; 1s ago
Process: 16025 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
Process: 16022 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
Process: 16020 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
Main PID: 16028 (multipathd)
Tasks: 6
Memory: 2.2M
CGroup: /system.slice/multipathd.service
└─16028 /sbin/multipathd
12月 05 08:45:24 docker systemd[1]: Starting Device-Mapper Multipath Device Controller...
12月 05 08:45:24 docker systemd[1]: Started Device-Mapper Multipath Device Controller.
12月 05 08:45:24 docker multipathd[16028]: path checkers start up
### To restart the service
##systemctl restart multipathd
## On CentOS 6, restart with
##/etc/init.d/multipathd restart
- On CentOS 6, use the following
Check the runlevels:
chkconfig --list|grep multipathd
multipathd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
Enable the service for runlevels 2-5:
chkconfig --level 2345 multipathd on
chkconfig --list|grep multipathd
multipathd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
4. Verify multipath
[root@docker ~]#multipath -ll
If the disks show up in an "active" state, multipath has been configured successfully and is working.
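Two more quick checks (standard tools on CentOS 7; adjust names to your environment):
# The multipath maps should appear as device-mapper devices
ls -l /dev/mapper/mpath*
# dmsetup should list them with the multipath target
dmsetup ls --target multipath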
⚠ Note:
The following two steps are recorded for reference only; in my own testing multipath worked fine without them.
Update the initramfs:
After changing the multipath configuration you may need to rebuild the initramfs so that the kernel picks up the multipath settings at boot:
sudo dracut -f
Reboot the system:
To apply all changes, it is safest to reboot:
sudo reboot
5. Information collected before handover
- Details of the two nodes
- node1
[root@NSR-db1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 447G 0 disk
├─sda1 8:1 1 1000M 0 part /boot
└─sda2 8:2 1 446G 0 part
├─sys_vg00-root_lv00 253:0 0 100G 0 lvm /
├─sys_vg00-swap_lv00 253:1 0 64G 0 lvm [SWAP]
├─sys_vg00-usr_lv00 253:2 0 10G 0 lvm /usr
├─sys_vg00-opt_lv00 253:3 0 20G 0 lvm /opt
├─sys_vg00-tmp_lv00 253:4 0 10G 0 lvm /tmp
├─sys_vg00-var_lv00 253:5 0 50G 0 lvm /var
└─sys_vg00-home_lv00 253:6 0 100G 0 lvm /home
sdb 8:16 1 2.2T 0 disk
└─sdb1 8:17 1 2.2T 0 part /data1
sdc 8:32 0 1000G 0 disk
└─mpatha 253:7 0 1000G 0 mpath
sdd 8:48 0 200G 0 disk
└─mpathb 253:8 0 200G 0 mpath
sde 8:64 0 200G 0 disk
└─mpathc 253:9 0 200G 0 mpath
sdf 8:80 0 30G 0 disk
└─mpathd 253:10 0 30G 0 mpath
sdg 8:96 0 1000G 0 disk
└─mpatha 253:7 0 1000G 0 mpath
sdh 8:112 0 200G 0 disk
└─mpathb 253:8 0 200G 0 mpath
sdi 8:128 0 200G 0 disk
└─mpathc 253:9 0 200G 0 mpath
sdj 8:144 0 30G 0 disk
└─mpathd 253:10 0 30G 0 mpath
sdk 8:160 0 1000G 0 disk
└─mpatha 253:7 0 1000G 0 mpath
sdl 8:176 0 200G 0 disk
└─mpathb 253:8 0 200G 0 mpath
sdm 8:192 0 200G 0 disk
└─mpathc 253:9 0 200G 0 mpath
sdn 8:208 0 30G 0 disk
└─mpathd 253:10 0 30G 0 mpath
sdo 8:224 0 1000G 0 disk
└─mpatha 253:7 0 1000G 0 mpath
sdp 8:240 0 200G 0 disk
└─mpathb 253:8 0 200G 0 mpath
sdq 65:0 0 200G 0 disk
└─mpathc 253:9 0 200G 0 mpath
sdr 65:16 0 30G 0 disk
└─mpathd 253:10 0 30G 0 mpath
[root@NSR-db1 ~]#
[root@NSR-db1 ~]# multipath -ll
mpathd (360060e80072be00000302be000000280) dm-10 HITACHI ,OPEN-V
size=30G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:3 sdf 8:80 active ready running
|- 7:0:1:3 sdj 8:144 active ready running
|- 8:0:0:3 sdn 8:208 active ready running
`- 8:0:1:3 sdr 65:16 active ready running
mpathc (360060e80072be00000302be00000027f) dm-9 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:2 sde 8:64 active ready running
|- 7:0:1:2 sdi 8:128 active ready running
|- 8:0:0:2 sdm 8:192 active ready running
`- 8:0:1:2 sdq 65:0 active ready running
mpathb (360060e80072be00000302be00000027e) dm-8 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:1 sdd 8:48 active ready running
|- 7:0:1:1 sdh 8:112 active ready running
|- 8:0:0:1 sdl 8:176 active ready running
`- 8:0:1:1 sdp 8:240 active ready running
mpatha (360060e80072be00000302be00000027a) dm-7 HITACHI ,OPEN-V
size=1000G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:0 sdc 8:32 active ready running
|- 7:0:1:0 sdg 8:96 active ready running
|- 8:0:0:0 sdk 8:160 active ready running
`- 8:0:1:0 sdo 8:224 active ready running
[root@NSR-db1 ~]# ls /dev/sd
sda sda1 sda2 sdb sdb1 sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp sdq sdr
[root@NSR-db1 ~]# ls /dev/sd
[root@NSR-db1 ~]# fdisk -l
Disk /dev/sda: 480.0 GB, 479952109568 bytes, 937406464 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d0e82
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2050047 1024000 83 Linux
/dev/sda2 2050048 937406463 467678208 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdb: 2394.4 GB, 2394444267520 bytes, 4676648960 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: C94C497D-46C4-47D8-85B6-3C28768F5470
# Start End Size Type Name
1 34 4676647007 2.2T Microsoft basic primary
Disk /dev/mapper/sys_vg00-root_lv00: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/sys_vg00-swap_lv00: 68.7 GB, 68719476736 bytes, 134217728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/sys_vg00-usr_lv00: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/sys_vg00-opt_lv00: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/sys_vg00-tmp_lv00: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/sys_vg00-var_lv00: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/sys_vg00-home_lv00: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdf: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdg: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdh: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdi: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdj: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdk: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdl: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdm: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdn: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdo: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdp: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdq: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdr: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/mpatha: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/mpathb: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/mpathc: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/mpathd: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@NSR-db1 ~]#
- node2
[root@NSR-db2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 447G 0 disk
├─sda1 8:1 1 1000M 0 part /boot
└─sda2 8:2 1 446G 0 part
├─sys_vg00-root_lv00 253:0 0 100G 0 lvm /
├─sys_vg00-swap_lv00 253:1 0 64G 0 lvm [SWAP]
├─sys_vg00-usr_lv00 253:2 0 10G 0 lvm /usr
├─sys_vg00-opt_lv00 253:3 0 20G 0 lvm /opt
├─sys_vg00-tmp_lv00 253:4 0 10G 0 lvm /tmp
├─sys_vg00-var_lv00 253:5 0 50G 0 lvm /var
└─sys_vg00-home_lv00 253:6 0 100G 0 lvm /home
sdb 8:16 1 3.3T 0 disk
└─sdb1 8:17 1 3.3T 0 part /data1
sdc 8:32 0 1000G 0 disk
└─mpatha 253:7 0 1000G 0 mpath
sdd 8:48 0 200G 0 disk
└─mpathb 253:8 0 200G 0 mpath
sde 8:64 0 200G 0 disk
└─mpathc 253:9 0 200G 0 mpath
sdf 8:80 0 30G 0 disk
└─mpathd 253:10 0 30G 0 mpath
sdg 8:96 0 1000G 0 disk
└─mpatha 253:7 0 1000G 0 mpath
sdh 8:112 0 200G 0 disk
└─mpathb 253:8 0 200G 0 mpath
sdi 8:128 0 200G 0 disk
└─mpathc 253:9 0 200G 0 mpath
sdj 8:144 0 30G 0 disk
└─mpathd 253:10 0 30G 0 mpath
sdk 8:160 0 1000G 0 disk
└─mpatha 253:7 0 1000G 0 mpath
sdl 8:176 0 200G 0 disk
└─mpathb 253:8 0 200G 0 mpath
sdm 8:192 0 200G 0 disk
└─mpathc 253:9 0 200G 0 mpath
sdn 8:208 0 30G 0 disk
└─mpathd 253:10 0 30G 0 mpath
sdo 8:224 0 1000G 0 disk
└─mpatha 253:7 0 1000G 0 mpath
sdp 8:240 0 200G 0 disk
└─mpathb 253:8 0 200G 0 mpath
sdq 65:0 0 200G 0 disk
└─mpathc 253:9 0 200G 0 mpath
sdr 65:16 0 30G 0 disk
└─mpathd 253:10 0 30G 0 mpath
[root@NSR-db2 ~]#
[root@NSR-db2 ~]# multipath -ll
mpathd (360060e80072be00000302be000000280) dm-10 HITACHI ,OPEN-V
size=30G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:3 sdf 8:80 active ready running
|- 7:0:1:3 sdj 8:144 active ready running
|- 8:0:0:3 sdn 8:208 active ready running
`- 8:0:1:3 sdr 65:16 active ready running
mpathc (360060e80072be00000302be00000027f) dm-9 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:2 sde 8:64 active ready running
|- 7:0:1:2 sdi 8:128 active ready running
|- 8:0:0:2 sdm 8:192 active ready running
`- 8:0:1:2 sdq 65:0 active ready running
mpathb (360060e80072be00000302be00000027e) dm-8 HITACHI ,OPEN-V
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:1 sdd 8:48 active ready running
|- 7:0:1:1 sdh 8:112 active ready running
|- 8:0:0:1 sdl 8:176 active ready running
`- 8:0:1:1 sdp 8:240 active ready running
mpatha (360060e80072be00000302be00000027a) dm-7 HITACHI ,OPEN-V
size=1000G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 7:0:0:0 sdc 8:32 active ready running
|- 7:0:1:0 sdg 8:96 active ready running
|- 8:0:0:0 sdk 8:160 active ready running
`- 8:0:1:0 sdo 8:224 active ready running
[root@NSR-db2 ~]#
Notes
This is only a basic multipath configuration; the detailed tuning will be done by the business DBAs. The purpose here is simply to check that the LUNs presented through the underlying SAN zoning show up with consistent paths on both nodes.
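A quick way to compare what each node actually sees (the awk pattern matches the alias/WWID lines in the multipath -ll output shown above; adjust it if your aliases differ):
# On each node, record alias + WWID, then diff the two files
multipath -ll | awk '/^mpath/{print $1, $2}' > /tmp/mpath_$(hostname).txt
# Copy one file to the other node and compare, e.g.:
# diff /tmp/mpath_NSR-db1.txt /tmp/mpath_NSR-db2.txt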