Add a dump device called lvdump1:
[root@stlam61p]:/home/root # lvcreate -L <size in MB> -n lvdump1 /dev/vg00
# lvchange -C y /dev/vg00/lvdump1
# lvchange -r n /dev/vg00/lvdump1
Logical volume "/dev/vg00/lvdump1" has been successfully changed.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
# crashconf -a /dev/vg00/lvdump1
# crashconf -v
The disk is claimed in ioscan.
fuser -c shows it clean.
drd insists the disk is busy.
A DRD crash caused the issue.
I don't want to reboot; that is an admission of defeat.
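The checks behind those statements look roughly like this (a sketch; disk143 is the device named in the DRD output below):
# ioscan -NfnCdisk /dev/disk/disk143    ## device shows up CLAIMED
# pvdisplay /dev/disk/disk143           ## confirm no active volume group owns it
# fuser /dev/rdisk/disk143              ## confirm nothing has the raw device open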
ERROR: Analysis of file system creation fails.
- Analysis of target fails.
- Analysis of the configuration with disk "/dev/disk/disk143" fails.
- The analysis step for creation of an inactive system image failed.
- The default DRD mount point "/var/opt/drd/mnts/sysimage_001/" cannot be used due to the following error(s):
- The mount point /var/opt/drd/mnts/sysimage_001/ is not an empty directory as required.
* Analyzing For System Image Cloning failed with 1 error.
* DRD operation failed, contents of /var/opt/drd/tmp copied to /var/opt/drd/save.
======= 08/13/18 06:39:16 EDT END Clone System Image failed with 1 error. (user=hcladmin) (jobid=ohonq001)
The fix is to clear out the stale DRD mount area and reset the disk's kernel metrics:
cd /var/opt/drd/mnts/
rm -rf *
scsimgr clear_kmstat -D /dev/rdisk/disk143
scsimgr: Cleared the Kmetric data successfully
DRD nirvana.
Tags: drd, hpux 11.31, LVM, scsimgr
On an LVM 1.0 volume group, the task is a no-downtime storage migration: Hitachi to Pure solid state storage. Mirror/UX is required. The disks are almost the same size:
dbrestore:root > diskinfo /dev/rdisk/disk42
SCSI describe of /dev/rdisk/disk42:
vendor: HITACHI
product id: OPEN-V
type: direct access
size: 16777216 Kbytes
bytes per sector: 512
dbrestore:root > diskinfo /dev/rdisk/disk52
SCSI describe of /dev/rdisk/disk52:
vendor: PURE
product id: FlashArray
type: direct access
size: 10485760 Kbytes
bytes per sector: 512
pvcreate /dev/rdisk/disk52
vgextend /dev/vgtest /dev/disk/disk52
Before state:
dbrestore:root > vgdisplay -v vgtest
--- Volume groups ---
VG Name /dev/vgtest
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 2
Act PV 2
Max PE per PV 4095
VGDA 4
PE Size (Mbytes) 4
Total PE 6654
Alloc PE 1024
Free PE 5630
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 262080m
VG Max Extents 65520
--- Logical volumes ---
LV Name /dev/vgtest/lvtest
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 1024
Allocated PE 1024
Used PV 1
--- Physical volumes ---
PV Name /dev/disk/disk42
PV Status available
Total PE 4095
Free PE 4095
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk52
PV Status available
Total PE 2559
Free PE 1535
Autoswitch On
Proactive Polling On
dbrestore:root > ioscan -NfnCdisk /dev/disk/disk42
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
disk 42 64000/0xfa00/0x21 esdisk CLAIMED DEVICE HITACHI OPEN-V
/dev/disk/disk42 /dev/rdisk/disk42
dbrestore:root > ioscan -NfnCdisk /dev/disk/disk52
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
disk 52 64000/0xfa00/0x35 esdisk CLAIMED DEVICE PURE FlashArray
/dev/disk/disk52 /dev/rdisk/disk52
dbrestore:root > bdf | grep test
/dev/vgtest/lvtest 4194304 19544 3913845 0% /test
dbrestore:root > lvdisplay -v /dev/vgtest/lvtest
--- Logical volumes ---
LV Name /dev/vgtest/lvtest
VG Name /dev/vgtest
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 4096
Current LE 1024
Allocated PE 1024
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/disk/disk42 1024 1024
--- Logical extents ---
LE PV1 PE1 Status 1
00000 /dev/disk/disk42 00000 current
00001 /dev/disk/disk42 00001 current
00002 /dev/disk/disk42 00002 current
...
01022 /dev/disk/disk42 01022 current
01023 /dev/disk/disk42 01023 current
dbrestore:root > lvextend -m 1 /dev/vgtest/lvtest /dev/disk/disk52
The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ....
Logical volume "/dev/vgtest/lvtest" has been successfully extended.
Volume Group configuration for /dev/vgtest has been saved in /etc/lvmconf/vgtest.conf
dbrestore:root > lvdisplay -v /dev/vgtest/lvtest
--- Logical volumes ---
LV Name /dev/vgtest/lvtest
VG Name /dev/vgtest
LV Permission read/write
LV Status available/syncd
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 4096
Current LE 1024
Allocated PE 2048
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/disk/disk42 1024 1024
/dev/disk/disk52 1024 1024
--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 /dev/disk/disk42 00000 current /dev/disk/disk52 00000 current
00001 /dev/disk/disk42 00001 current /dev/disk/disk52 00001 current
00002 /dev/disk/disk42 00002 current /dev/disk/disk52 00002 current
...
01023 /dev/disk/disk42 01023 current /dev/disk/disk52 01023 current
dbrestore:root > bdf | grep test
/dev/vgtest/lvtest 4194304 19544 3913845 0% /test
dbrestore:root > lvreduce -m 0 /dev/vgtest/lvtest /dev/disk/disk42
Logical volume "/dev/vgtest/lvtest" has been successfully reduced.
Volume Group configuration for /dev/vgtest has been saved in /etc/lvmconf/vgtest.conf
dbrestore:root > bdf | grep test
/dev/vgtest/lvtest 4194304 19544 3913845 0% /test
dbrestore:root > lvdisplay -v /dev/vgtest/lvtest
--- Logical volumes ---
LV Name /dev/vgtest/lvtest
VG Name /dev/vgtest
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 4096
Current LE 1024
Allocated PE 1024
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/disk/disk52 1024 1024
--- Logical extents ---
LE PV1 PE1 Status 1
00000 /dev/disk/disk52 00000 current
00001 /dev/disk/disk52 00001 current
...
01023 /dev/disk/disk52 01023 current
dbrestore:root > bdf | grep test
/dev/vgtest/lvtest 4194304 19544 3913845 0% /test
dbrestore:root >
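Boiled down, the migration is a short sequence per logical volume. Here is a sketch; everything except the final vgreduce appears above, and removing the old Hitachi disk from the VG is the assumed last step once no LV uses it:
pvcreate /dev/rdisk/disk52                          ## prepare the new Pure disk
vgextend /dev/vgtest /dev/disk/disk52               ## add it to the volume group
lvextend -m 1 /dev/vgtest/lvtest /dev/disk/disk52   ## mirror the LV onto it and let it sync
lvreduce -m 0 /dev/vgtest/lvtest /dev/disk/disk42   ## drop the copy on the old Hitachi disk
vgreduce /dev/vgtest /dev/disk/disk42               ## assumed final step: remove the old disk from the VG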
Hitachi shops face two annoyances:
1. xpinfo does not work on non-Hitachi storage, for example Pure storage.
2. xpinfo does not work on HPVM guests, depending on how the storage is passed through from the HPVM host.
I now present xpinfonew, which, though raw and unfinished, handles both cases.
The output:
myserv0:root > ./xpinfonew
Device path ldev
==========================================================================
/dev/rdisk/disk111 =:=
/dev/rdisk/disk12 30:86
/dev/rdisk/disk172 03:f3
/dev/rdisk/disk215 46:2c
/dev/rdisk/disk216 46:30
/dev/rdisk/disk217 46:34
/dev/rdisk/disk218 46:38
/dev/rdisk/disk219 46:28
/dev/rdisk/disk220 46:25
/dev/rdisk/disk221 46:27
/dev/rdisk/disk222 46:2a
/dev/rdisk/disk223 46:2e
/dev/rdisk/disk224 46:32
/dev/rdisk/disk225 46:2b
/dev/rdisk/disk226 46:2f
/dev/rdisk/disk227 46:33
/dev/rdisk/disk237 46:37
/dev/rdisk/disk238 46:36
/dev/rdisk/disk239 46:26
/dev/rdisk/disk240 46:29
/dev/rdisk/disk241 46:2d
/dev/rdisk/disk242 46:31
/dev/rdisk/disk243 46:35
/dev/rdisk/disk244 46:39
/dev/rdisk/disk4 aa:bf
/dev/rdisk/disk5 8b:c3
/dev/rdisk/disk6 03:a6
/dev/rdisk/disk9 01:00
myserv0:root > ./xpinfonew raw
Device path ldev
==========================================================================
/dev/rdisk/disk111 =
/dev/rdisk/disk12 3086
/dev/rdisk/disk172 03f3
/dev/rdisk/disk215 462c
/dev/rdisk/disk216 4630
/dev/rdisk/disk217 4634
/dev/rdisk/disk218 4638
/dev/rdisk/disk219 4628
/dev/rdisk/disk220 4625
/dev/rdisk/disk221 4627
/dev/rdisk/disk222 462a
/dev/rdisk/disk223 462e
/dev/rdisk/disk224 4632
/dev/rdisk/disk225 462b
/dev/rdisk/disk226 462f
/dev/rdisk/disk227 4633
/dev/rdisk/disk237 4637
/dev/rdisk/disk238 4636
/dev/rdisk/disk239 4626
/dev/rdisk/disk240 4629
/dev/rdisk/disk241 462d
/dev/rdisk/disk242 4631
/dev/rdisk/disk243 4635
/dev/rdisk/disk244 4639
/dev/rdisk/disk4 aabf
/dev/rdisk/disk5 8bc3
/dev/rdisk/disk6 03a6
/dev/rdisk/disk9 0100
cat xpinfonew
#!/bin/ksh
# Get ldev from any disk regardless of storage provider
#
# 10/26/2017 Steven "Shmuel" Protter steven.protter@hcl.com
#
echo "Device path \t\t ldev "
echo "=========================================================================="
ioscan -NfnCdisk | awk '/rdisk/{ print $(NF) }' | awk -F_ '{ print $1 }' | sort -u | while read -r dv
do
  ldev=$(/var/adm/bin/getldev.ksh ${dv} ${1} )
  echo "${dv} \t ${ldev}"
done
The code:
cat /var/adm/bin/getldev.ksh
#!/bin/ksh
# Get ldev from any disk regardless of storage provider
#
# 10/26/2017 Steven "Shmuel" Protter steven.protter@hcl.com
#
argies=$#
if [ $argies -eq 0 ]
then
  echo "------------ 1 argument required: device path, ex: /dev/rdisk/disk101 ------------"
  exit 1
fi
dv=$1
fmt=$2
## /usr/sbin/scsimgr lun_map -D ${dv} | awk '/World Wide Identifier/{ print $(NF) }'
rldev=$(/usr/sbin/scsimgr lun_map -D ${dv} | awk '/World Wide Identifier/{ print substr ( $NF, length($NF) - 3, length($NF) ) }')
l1=$(echo ${rldev} | awk '{ print substr ( $NF, length($NF) - 3, 2 ) }')
l2=$(echo ${rldev} | awk '{ print substr ( $NF, length($NF) - 1, length($NF) ) }')
### echo "raw: ${rldev} l1: ${l1} l2: ${l2} ..."
if [ "$fmt" = "raw" ]
then
  echo ${rldev}
else
  echo "${l1}:${l2}"
fi
It should work on any SAN-based storage.
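Called directly, the helper behaves like this (illustrative, using the disk12 ldev from the listing above):
# /var/adm/bin/getldev.ksh /dev/rdisk/disk12
30:86
# /var/adm/bin/getldev.ksh /dev/rdisk/disk12 raw
3086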
Tags: HP-UX, hpux, storage ldev, storage ldev works in hpvm guests, xpinfo improvement
SAN boot system.
The HBA has to be replaced.
Then you have to boot into maintenance mode to re-establish all your paths.
Procedure authored by my colleague Mahesh Koduru.
Before you start, make sure you have a current map file on the root filesystem.
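Something like this, run on the live system beforehand, will do it (a sketch using the same vgexport preview form that appears in the procedure below):
# vgexport -p -s -m /vg00.map vg00
# ll /vg00.map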
"Reboot the server and follow these steps:
Interrupt the boot and boot the system in maintenance mode.
fs0:\EFI\HPUX> hpux
HPUX> boot -lm -lq vmunix    ## This brings the system up in LVM maintenance mode
# vgdisplay vg00    ## The VG should be deactivated
# ll /dev/*/group    ## Collect the group file major/minor numbers
# vgexport -p -s -m vg00.map /dev/vg00    ## Keep the map file in the present directory, i.e. root (/)
# ll vg00.map
# vgexport -v /dev/vg00
# mkdir -m 755 /dev/vg00
# mknod /dev/vg00/group c 64 0x030000    ## This is an example; your major/minor number may vary
# vgimport -s -N -m vg00.map /dev/vg00    ## The -N is B.11.31 only, to convert to agile (persistent) DSFs
# vgchange -a y vg00
# mount -a    ## Mount only the root filesystems
# setboot    ## Check and correct setboot issues
# lvlnboot -v vg00    ## Check and correct lvlnboot issues; lvrmboot can be used if needed
# /usr/sbin/lvlnboot -v
# lvlnboot -r /dev/vg00    ## Execute the following to fix boot labels
# lvlnboot -r /dev/vg00/lvol3
# lvlnboot -b /dev/vg00/lvol1
# lvlnboot -s /dev/vg00/lvol2
# lvlnboot -d /dev/vg00/lvol2
# lvlnboot -v
# lvlnboot -R
Note: Comment out the swap entries in /etc/fstab, then issue the command below.
# shutdown -ry 0
"
Ever had a problem equating your disk device output for boot disks to what you see on the EFI prompt?
map -r
?????
# ioscan -fneCdisk
Class I H/W Path Driver S/W State H/W Type Description
==================================================================
disk 0 0/2/1/0.0.0.0.0 sdisk CLAIMED DEVICE HP DG0300BALVP
/dev/dsk/c0t0d0 /dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s2
/dev/dsk/c0t0d0s1 /dev/rdsk/c0t0d0 /dev/rdsk/c0t0d0s3
/dev/dsk/c0t0d0s2 /dev/rdsk/c0t0d0s1
Acpi(HWP0002,PNP0A03,200)/Pci(1|0)/Sas(Addr5000C5000A9C4FC9, Lun0)/HD(Part1,SigD061ACEA-862A-11DE-8000-D6217B60E588)/\EFI\HPUX\HPUX.EFI
Note: EFI is not agile aware; you get nothing useful by trying ioscan -NfneCdisk.
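If all you have is the agile DSF, ioscan -m dsf will hand you the matching legacy device files to feed into the legacy ioscan above (the disk number here is illustrative):
# ioscan -m dsf /dev/rdisk/disk4    ## lists the persistent DSF and its legacy DSF(s), e.g. /dev/rdsk/cXtYdZ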
Tags: efi, efi not agile aware, ioscan, making ioscan output match efi output
Thanks to Veerappan Dhandapani of HCL Technologies for bringing me this problem
You have 50 GB free in the volume group.
You try to extend the logical volume and you can't.
Basics:
1) bdf is the tool for measuring filesystem size, not logical volume size.
2) lvdisplay is the tool for measuring logical volume size.
Steps:
1) Extend the logical volume. That is why the two points above matter: mid-process, after the logical volume has been extended, the LV size and the filesystem size are not equal.
2) Extend the file system.
root@mybox# vgdisplay -v vgdata | grep -i "Free PE"
Free PE 1784
Free PE 511
Free PE 0
Free PE 0
Free PE 0
Free PE 0
Free PE 1273
Free PE 0
root@mybox# lvdisplay /dev/vgdata/lv_prod_datastaging
--- Logical volumes ---
LV Name /dev/vgdata/lv_prod_datastaging
VG Name /dev/vgdata
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 245760
Current LE 7680
Allocated PE 7680
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation PVG-strict
IO Timeout (Seconds) default
root@mybox# lvextend -L 278528 /dev/vgdata/lv_prod_datastaging
lvextend: Not enough free physical extents available.
The problem here is the allocation policy.
lvchange -C n -s n /dev/vgdata/lv_prod_datastaging
See the man page for details: -C is for contiguous allocation, -s is for strict allocation.
We changed both to n(o).
Off camera we extended the logical volume.
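The command was not captured, but it was presumably of this form (hypothetical reconstruction; 302848 MB matches the resulting LV size shown below):
lvextend -L 302848 /dev/vgdata/lv_prod_datastaging    ## hypothetical; size taken from the lvdisplay output that follows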
lvdisplay /dev/vgdata/lv_prod_datastaging
--- Logical volumes ---
LV Name /dev/vgdata/lv_prod_datastaging
VG Name /dev/vgdata
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 302848
Current LE 9464
Allocated PE 9464
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation non-strict
IO Timeout (Seconds) default
We take the size straight from the logical volume to feed into the Online JFS extend command, so we do not have to do math.
fsadm -F vxfs -b 302848m /prod/datastaging
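If you do not even want to copy the number by hand, the same trick can be scripted (a sketch, assuming the LV and mount point above):
SZ=$(lvdisplay /dev/vgdata/lv_prod_datastaging | awk '/LV Size/{ print $NF }')
fsadm -F vxfs -b ${SZ}m /prod/datastaging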
bdfmegs is Bill Hassell's new and improved bdf
/var/adm/bin/bdfmegs /prod/datastaging
File-System Mbytes Used Avail %Used Mounted on
/dev/vgdata/lv_prod_datastaging 310.1g 215.7g 94.0g 70% /prod/datastaging
Tags: bdf, fsadm, logical volume manager, lvchange, LVM, no math fsadm, online JFS, strict allocation change
This applies to HPVM, definitely version 4.00 and probably all the way through version 6.2, on HP-UX 11.31.
You have an HPVM host named hpvm1 and it has a guest dguest1.
The storage team presents you the storage.
ioscan -NfnCdisk (or)
ioscan -fnCdisk
The disk turns out to be disk 5.
To pull back disk288 from a guest:
rmsf -k -H <hardware path from ioscan -NfnCdisk>
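For example, assuming ioscan reports the disk at a hardware path like 64000/0xfa00/0x9b (the path here is hypothetical; use the one from your own output):
# ioscan -NfnCdisk | grep disk288
# rmsf -k -H 64000/0xfa00/0x9b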
I recently encountered a volume group that was part legacy devices, part agile DSFs.
root@protterdbsvr1:/root/shuffle> vgdisplay -v vgprotter
--- Volume groups ---
VG Name /dev/vgprotter
VG Write Access read/write
VG Status available
Max LV 2047
Cur LV 15
Open LV 15
Cur Snapshot LV 0
Max PV 2048
Cur PV 11
Act PV 11
Max PE per PV 65536
VGDA 22
PE Size (Mbytes) 16
Unshare unit size (Kbytes) 1024
Total PE 48004
Alloc PE 46765
Current pre-allocated PE 0
Free PE 1239
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 2.2
VG Max Size 1t
VG Max Extents 65536
Cur Snapshot Capacity 0p
Max Snapshot Capacity 1t
--- Logical volumes ---
LV Name /dev/vgprotter/sqlbin
LV Status available/syncd
LV Size (Mbytes) 20000
Current LE 1250
Allocated PE 1250
Used PV 1
LV Name /dev/vgprotter/sqladmin
LV Status available/syncd
LV Size (Mbytes) 4000
Current LE 250
Allocated PE 250
Used PV 1
LV Name /dev/vgprotter/sqlctrl1
LV Status available/syncd
LV Size (Mbytes) 512
Current LE 32
Allocated PE 32
Used PV 1
LV Name /dev/vgprotter/sqlctrl2
LV Status available/syncd
LV Size (Mbytes) 512
Current LE 32
Allocated PE 32
Used PV 1
LV Name /dev/vgprotter/sqldata1
LV Status available/syncd
LV Size (Mbytes) 550000
Current LE 34375
Allocated PE 34375
Used PV 10
LV Name /dev/vgprotter/sqldiag
LV Status available/syncd
LV Size (Mbytes) 4000
Current LE 250
Allocated PE 250
Used PV 1
LV Name /dev/vgprotter/sqlexport
LV Status available/syncd
LV Size (Mbytes) 70000
Current LE 4375
Allocated PE 4375
Used PV 5
LV Name /dev/vgprotter/sqlindex1
LV Status available/syncd
LV Size (Mbytes) 30000
Current LE 1875
Allocated PE 1875
Used PV 1
LV Name /dev/vgprotter/sqlredo1
LV Status available/syncd
LV Size (Mbytes) 10000
Current LE 625
Allocated PE 625
Used PV 1
LV Name /dev/vgprotter/sqlredo2
LV Status available/syncd
LV Size (Mbytes) 10000
Current LE 625
Allocated PE 625
Used PV 1
LV Name /dev/vgprotter/sqlsystem
LV Status available/syncd
LV Size (Mbytes) 7200
Current LE 450
Allocated PE 450
Used PV 1
LV Name /dev/vgprotter/sqltemp
LV Status available/syncd
LV Size (Mbytes) 20000
Current LE 1250
Allocated PE 1250
Used PV 1
LV Name /dev/vgprotter/sqltools
LV Status available/syncd
LV Size (Mbytes) 1008
Current LE 63
Allocated PE 63
Used PV 1
LV Name /dev/vgprotter/sqlundo
LV Status available/syncd
LV Size (Mbytes) 20000
Current LE 1250
Allocated PE 1250
Used PV 1
LV Name /dev/vgprotter/sqlusers
LV Status available/syncd
LV Size (Mbytes) 1008
Current LE 63
Allocated PE 63
Used PV 1
--- Physical volumes ---
PV Name /dev/dsk/c3t1d6
PV Name /dev/dsk/c5t1d6 Alternate Link
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/dsk/c3t1d7
PV Name /dev/dsk/c5t1d7 Alternate Link
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/dsk/c3t2d0
PV Name /dev/dsk/c5t2d0 Alternate Link
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/dsk/c3t2d1
PV Name /dev/dsk/c5t2d1 Alternate Link
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/dsk/c3t2d2
PV Name /dev/dsk/c5t2d2 Alternate Link
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk220
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk221
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk222
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk223
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk224
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk225
PV Status available
Total PE 4364
Free PE 1239
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
The issue is that two different technologies, alternate link (PV links) and native multipathing, are in use in the same volume group. The volume group was working, but it might have maintenance issues down the road. Plus, I personally found it darn confusing.
PV Name /dev/dsk/c3t2d2
PV Name /dev/dsk/c5t2d2 Alternate Link
The fix:
vgextend vgprotter /dev/disk/disk53
echo $?
0
vgreduce vgprotter /dev/dsk/c3t2d2
echo $?
0
vgreduce vgprotter /dev/dsk/c5t2d2
echo $?
0
We check the return code to make sure the operation was a success. Repeat for each legacy device.
No downtime is required, though for production systems I recommend working under a change request per your organization's policy.
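To knock out the remaining legacy pairs, ioscan -m dsf tells you which agile DSF sits behind each legacy path (a sketch; the mapping step is my addition, the vgextend/vgreduce commands mirror the ones above):
# which persistent DSF backs the c3t2d2 / c5t2d2 pair?
ioscan -m dsf /dev/rdsk/c3t2d2
# swap in the agile DSF and drop both legacy paths, stopping on any failure
vgextend vgprotter /dev/disk/disk53 && \
vgreduce vgprotter /dev/dsk/c3t2d2 && \
vgreduce vgprotter /dev/dsk/c5t2d2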
What it looks like after we are done:
root@protterdbsvr1:/root> vgdisplay -v vgprotter
--- Volume groups ---
VG Name /dev/vgprotter
VG Write Access read/write
VG Status available
Max LV 2047
Cur LV 15
Open LV 15
Cur Snapshot LV 0
Max PV 2048
Cur PV 11
Act PV 11
Max PE per PV 65536
VGDA 22
PE Size (Mbytes) 16
Unshare unit size (Kbytes) 1024
Total PE 48004
Alloc PE 46765
Current pre-allocated PE 0
Free PE 1239
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 2.2
VG Max Size 1t
VG Max Extents 65536
Cur Snapshot Capacity 0p
Max Snapshot Capacity 1t
--- Logical volumes ---
LV Name /dev/vgprotter/sqlbin
LV Status available/syncd
LV Size (Mbytes) 20000
Current LE 1250
Allocated PE 1250
Used PV 1
LV Name /dev/vgprotter/sqladmin
LV Status available/syncd
LV Size (Mbytes) 4000
Current LE 250
Allocated PE 250
Used PV 1
LV Name /dev/vgprotter/sqlctrl1
LV Status available/syncd
LV Size (Mbytes) 512
Current LE 32
Allocated PE 32
Used PV 1
LV Name /dev/vgprotter/sqlctrl2
LV Status available/syncd
LV Size (Mbytes) 512
Current LE 32
Allocated PE 32
Used PV 1
LV Name /dev/vgprotter/sqldata1
LV Status available/syncd
LV Size (Mbytes) 550000
Current LE 34375
Allocated PE 34375
Used PV 10
LV Name /dev/vgprotter/sqldiag
LV Status available/syncd
LV Size (Mbytes) 4000
Current LE 250
Allocated PE 250
Used PV 1
LV Name /dev/vgprotter/sqlexport
LV Status available/syncd
LV Size (Mbytes) 70000
Current LE 4375
Allocated PE 4375
Used PV 5
LV Name /dev/vgprotter/sqlindex1
LV Status available/syncd
LV Size (Mbytes) 30000
Current LE 1875
Allocated PE 1875
Used PV 1
LV Name /dev/vgprotter/sqlredo1
LV Status available/syncd
LV Size (Mbytes) 10000
Current LE 625
Allocated PE 625
Used PV 1
LV Name /dev/vgprotter/sqlredo2
LV Status available/syncd
LV Size (Mbytes) 10000
Current LE 625
Allocated PE 625
Used PV 1
LV Name /dev/vgprotter/sqlsystem
LV Status available/syncd
LV Size (Mbytes) 7200
Current LE 450
Allocated PE 450
Used PV 1
LV Name /dev/vgprotter/sqltemp
LV Status available/syncd
LV Size (Mbytes) 20000
Current LE 1250
Allocated PE 1250
Used PV 1
LV Name /dev/vgprotter/sqltools
LV Status available/syncd
LV Size (Mbytes) 1008
Current LE 63
Allocated PE 63
Used PV 1
LV Name /dev/vgprotter/sqlundo
LV Status available/syncd
LV Size (Mbytes) 20000
Current LE 1250
Allocated PE 1250
Used PV 1
LV Name /dev/vgprotter/sqlusers
LV Status available/syncd
LV Size (Mbytes) 1008
Current LE 63
Allocated PE 63
Used PV 1
--- Physical volumes ---
PV Name /dev/disk/disk220
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk221
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk222
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk223
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk224
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk225
PV Status available
Total PE 4364
Free PE 1239
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk57
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk56
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk55
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk54
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk53
PV Status available
Total PE 4364
Free PE 0
Current pre-allocated PE 0
Autoswitch On
Proactive Polling On
Tags: agile dsf, how to fix a mixed agile/legacy volume group with no downtime, hpux, legacy device, LVM, volume group
This is an improvement over cleaning up the problems if you do blow things up. Click here to see that fix.
Here is the thing: VxVM is messed up on HP-UX. The mirror break command is broken on 11.23 and 11.31.
That being said, depending on how you use it, you may or may not have a mess to clean up.
Scenario:
…
Note the disks are supposedly failing. Easy fix, though I can't say how long this will last.
Now we look at them.
Now they are fixed.
Now to the heart of the matter. Let's say you want to break c2t0d0 out of the mirror to, say, make a DRD image. The man page and HP support say you can use this form.
If you use that form on many HP-UX systems, the mirror break will fail and you will have a mess to clean up. If you want to prove your skills, go ahead and use that form, then click the link above to find the fix.
If you would rather look smart and, say, cruise the Internet, use this form.
You get the following UGLY results.
However, the only thing that actually goes wrong is removing the disk rootdisk02 from the rootdg.
Easily fixed with a single command.
vxdg -g rootdg rmdisk rootdisk02
vxdisk list shows:
A healthy rootdg, ready for DRD cloning.
Tags: how to break up a vxvm mirror without blowing up your rootdg, mirror break, vxvm, vxvm mirror break