Usually, when our Oracle VM pool needs more disk space, we create new repositories and then create our new guests or extend our virtual disks.
Sometimes customers ask us to extend an existing repository instead of creating a new one.
I would like to clarify that adding a repository is always safer than extending an existing one; this blog post simply demonstrates that extending a repository is possible and that it works.
This guide applies to Oracle VM releases 3.0, 3.1.1 and 3.2.1.
In this example we will extend a 250 GB repository to 300 GB. Here are the four easy steps:
1. Extend the LUN on the storage ( .... )
Identify the physical device to extend:
[root@ovm01 ~]# df -k /dev/mapper/3600143801259961a0000800001170000
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/3600143801259961a0000800001170000
262144000 235312128 26831872 90% /OVS/Repositories/0004fb0000030000480c7108c43bcaaa
[root@ovm01 scripts]# multipath -ll /dev/mapper/3600143801259961a0000800001170000
3600143801259961a0000800001170000 dm-47 HP,HSV360
size=250G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 2:0:0:29 sdhr 134:16 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:1:29 sdhs 134:32 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:2:29 sdht 134:48 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:3:29 sdhu 134:64 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:1:29 sdhw 134:96 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:0:29 sdhv 134:80 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:2:29 sdhx 134:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 3:0:3:29 sdhy 134:128 active ready running
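Before asking for the LUN to be grown on the array, it can help to note the exact size the operating system currently reports for the device, so the change is easy to confirm afterwards; a minimal sketch using the device from the example above (blockdev prints the size in bytes):
# blockdev --getsize64 /dev/mapper/3600143801259961a0000800001170000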
Using your storage administration tool (or the Oracle VM plugin for your storage), extend the physical device that hosts the repository.
2. Refresh the physical size of your storage devices
A small one-liner can help prepare the job:
# for i in `multipath -ll /dev/mapper/3600143801259961a0000800001170000| grep sd |sed -e 's:^..............::g' |awk '{print $1}'`; do echo "blockdev --rereadpt /dev/$i" ; done
You will get an output similar to the one shown below:
# blockdev --rereadpt /dev/sdhr
# blockdev --rereadpt /dev/sdhs
# blockdev --rereadpt /dev/sdht
# blockdev --rereadpt /dev/sdhu
# blockdev --rereadpt /dev/sdhw
# blockdev --rereadpt /dev/sdhv
# blockdev --rereadpt /dev/sdhx
# blockdev --rereadpt /dev/sdhy
Verify the output and check that the device paths are the ones you expect; then execute the commands on your Oracle VM Server.
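Once the echoed commands look correct, the same loop can be run without the echo so that blockdev is executed directly; a small sketch based on the loop above:
# for i in `multipath -ll /dev/mapper/3600143801259961a0000800001170000| grep sd |sed -e 's:^..............::g' |awk '{print $1}'`; do blockdev --rereadpt /dev/$i ; done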
Remember to execute this step on all the Oracle VM Servers (OVS) that are part of your pool!
3. Refresh the physical size of your multipath device
After refreshing all physical paths (sd* in this case) you have to refresh and verify the new size of the multipath device; to accomplish this step, execute the command below:
[root@ovm01 scripts]# multipathd -k"resize map 3600143801259961a0000800001170000"
ok
[root@ovmgv02 scripts]# multipath -ll /dev/mapper/3600143801259961a0000800001170000
3600143801259961a0000800001170000 dm-47 HP,HSV360
size=300G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 2:0:0:29 sdhr 134:16 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:1:29 sdhs 134:32 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:2:29 sdht 134:48 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 2:0:3:29 sdhu 134:64 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:1:29 sdhw 134:96 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:0:29 sdhv 134:80 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:2:29 sdhx 134:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 3:0:3:29 sdhy 134:128 active ready running
Remember to execute this step on all the Oracle VM Servers (OVS) that are part of your pool!
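If the pool has many members, a small loop from a management host can run the resize on every server; a minimal sketch, assuming passwordless SSH as root and the hypothetical hostnames ovm01, ovm02 and ovm03 (replace them with the servers of your pool):
# for h in ovm01 ovm02 ovm03; do ssh root@$h 'multipathd -k"resize map 3600143801259961a0000800001170000"' ; done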
4. Verify the current repository size and extend the OCFS2 filesystem
View the current repository size:
[root@ovm01 ~]# df -k /dev/mapper/3600143801259961a0000800001170000
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/3600143801259961a0000800001170000
262144000 235312128 26831872 90% /OVS/Repositories/0004fb0000030000480c7108c43bcaaa
Extend the OCFS2 filesystem:
From the man page of tunefs.ocfs2, option -S, --volume-size:
"Grow the size of the OCFS2 file system. If blocks-count is not specified, tunefs.ocfs2 extends the volume to the current size of the device."
So:
[root@ovm01 ~]# tunefs.ocfs2 -S /dev/mapper/3600143801259961a0000800001170000
NB: run the "tunefs.ocfs2" command just once, on a single node of the OCFS2 cluster.
View the new repository size:
[root@ovm01 ~]# df -k /dev/mapper/3600143801259961a0000800001170000
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/3600143801259961a0000800001170000
314572800 235315200 79257600 75% /OVS/Repositories/0004fb0000030000480c7108c43bcaaa
This article is dedicated to the person who keeps me up at night: little Joseph, my son, born January 9, 2013.
Comments and corrections are welcome.
Simon COTER