Tips and Tricks site for advanced HP-UX Engineers

04 Sep 09 Creating Logical Volumes and Filesystems

Quick and Dirty Example here.

In our last example, we created a volume group, vg03. It had three disks, and we expanded it to four because we planned capacity properly.

Our volume group now consists of 4 disks.

We are asked to create an approximately 10 GB file system in this SAN-based volume group.

vgdisplay /dev/vg03

vgdisplay -v /dev/vg03

< Insert vgdisplay example here>

HP vgdisplay documentation link (Note this tends to change. I can’t help it if HP breaks the links)

This will show an empty volume group, as we have not created any logical volumes yet.
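If you just want a quick sanity check rather than the full listing, a grep of the relevant fields works well. This is a minimal sketch; the exact field labels can vary slightly between HP-UX releases.

vgdisplay /dev/vg03 | grep -E "Cur LV|Cur PV|Free PE"

# Cur LV should be 0 for an empty volume group.
# Free PE tells you how many physical extents remain for new logical volumes.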

pvdisplay /dev/dsk/c10t0d1

… repeat for other disks …

<Insert pvdisplay examples here>

HP pvdisplay document link

Make sure nothing is on them.

It turns out 10 GB will fit quite nicely on a single disk. Since this is a SAN-based disk, we need not worry here about RAID configuration. If you are hosting an Oracle RDBMS, you should make sure the SAN admin sets up data, index, and rollback as RAID 1 or RAID 10 to ensure good performance.
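To confirm the disk really has room, check its free extents and multiply by the PE size. A rough sketch using the fields pvdisplay normally prints:

pvdisplay /dev/dsk/c10t0d1 | grep -E "PE Size|Free PE"

# With a 4 MB PE size, you need at least 2560 free extents:
# 2560 PE x 4 MB/PE = 10240 MB, i.e. the 10 GB we were asked for.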

lvcreate /dev/vg03

# Creates an empty logical volume on vg03. Uses default naming.

You can also do it this way if you prefer meaningful names:

lvcreate -n mydata /dev/vg03

lvextend -L 10240 /dev/vg03/mydata /dev/dsk/c10t0d1

# This command extends the logical volume to approximately 10 GB (10240 MB) and defines the disk it goes on. Always define the disk. Don’t let LVM or SAM decide where your data is going to go. Plan in advance. Note that LVM for Linux, which is a feature port and not a binary recompile, lets you specify the size as either 10 GB or 10240 MB. I am still waiting for that feature in LVM for HP-UX.
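If you prefer to think in extents rather than megabytes, the -l option does the same job. This sketch assumes the default 4 MB extent size, so adjust the number if your PE size differs:

lvextend -l 2560 /dev/vg03/mydata /dev/dsk/c10t0d1

# 10240 MB / 4 MB per extent = 2560 logical extents, the same 10 GB as above.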

newfs -F vxfs -o largefiles /dev/vg03/rmydata

# Why largefiles? Databases are big, and the default limit on file size in a file system is 2 GB. That is too small. I almost always set up my file systems these days for largefiles unless the file system itself is smaller than 2 GB.
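Once the file system is mounted (see below), you can verify or later change the largefiles setting with fsadm. This is a sketch; fsadm behavior can differ between base JFS and OnlineJFS.

fsadm -F vxfs /mydata

# Reports largefiles or nolargefiles for the mounted file system.

fsadm -F vxfs -o largefiles /mydata

# Turns largefiles on after the fact if the file system was built without it.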

# Create a mount point.

mkdir /mydata

# mount it.

mount /dev/vg03/mydata /mydata

# This does not set optimal JFS logging and recovery options, but that is a different article.
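As a small preview of that article, database file systems are often mounted with VxFS options along these lines. Treat this as an illustrative sketch, not a step to run now or a recommendation for every workload:

mount -F vxfs -o delaylog,nodatainlog /dev/vg03/mydata /mydata

# delaylog relaxes intent-log flushing for better throughput;
# nodatainlog keeps synchronous write data out of the intent log.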

bdf

# See if it's there and has the right capacity.

Next article: Edit /etc/fstab and set permanent mount options.
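As a preview, the /etc/fstab entry will look something like the following; the option list is just an example and should match whatever the next article settles on:

/dev/vg03/mydata /mydata vxfs delaylog,largefiles 0 2

# Fields: block device, mount point, file system type, mount options,
# backup frequency, fsck pass number.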

NOTE: This article needs to be checked and have vgdisplay and pvdisplay and other examples inserted into it.


04 Sep 09 LVM Volume Group Create: High Capacity VG

Volume group creation, done right, need only be done once to last a long time. A few simple steps up front make it a process you do once, and then you enjoy the benefits for the long term.

Step one is a little homework. Take a reasonable estimate of how many physical volumes the volume group is going to contain. Why is this important? Because by default LVM allocates resources as if there will be 255 physical volumes. Most volume groups never see that many disks, and the overall capacity is impacted by the default. For this example, we will pick a small volume group that is never anticipated to exceed 10 physical volumes. We will set the maximum physical volumes to 25 to leave a fair amount of headroom while allocating scarce resources more efficiently.
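If you want to see what the defaults look like before deciding, run vgdisplay against an existing volume group (vg00, the root volume group, is used here purely as an example):

vgdisplay /dev/vg00 | grep -E "Max PV|Max PE per PV|PE Size"

# Max PV is the ceiling we plan to set to 25; Max PE per PV and PE Size
# bound how large each physical volume in the group can be.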

Now the fun begins. We will create a volume group called vg03.

Discover the new disks. This is important if new LUNs have just been presented to the system.

insf -C disk (may not be needed on HP-UX 11.31)

ioscan -fnC disk

ioscan shows three disks for this example.

/dev/rdsk/c10t0d1 /dev/rdsk/c10t0d2 /dev/rdsk/c10t0d3

cd /dev

mkdir vg03

mknod /dev/vg03/group c 64 0x030000

# We have created a device file for the volume group.
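The minor number (0x030000 here) has to be unique across all volume groups on the system. If you are unsure which minor numbers are already taken, check the existing group files before running mknod:

ll /dev/*/group

# Each existing group file shows major number 64 and a minor number of the
# form 0xNN0000; pick a minor number no other volume group is using.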

We need to pvcreate the disks, which labels them for use by LVM.

pvcreate /dev/rdsk/c10t0d1

pvcreate /dev/rdsk/c10t0d2

pvcreate /dev/rdsk/c10t0d3

vgcreate -p 25 /dev/vg03 /dev/dsk/c10t0d1 /dev/dsk/c10t0d2 /dev/dsk/c10t0d3

# Alternative: vgcreate -e 65535 -s 16 /dev/vg10 /dev/dsk/c10t0d1 /dev/dsk/c12t0d1 /dev/dsk/c16t0d1 /dev/dsk/c17t0d1

The -e option raises the maximum number of physical extents per physical volume, and -s lets us set a larger PE size (in MB); both can increase the maximum capacity of the volume group.
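The payoff of those two options is easy to see with a little arithmetic:

# With -e 65535 and -s 16, each physical volume can address up to
# 65535 PE x 16 MB/PE = 1,048,560 MB, which is roughly 1 TB per PV.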

Now, inevitably, someone is going to decide to add another disk to this volume group. It may be immediately, or it may be down the road. We are prepared.

The SAN admin and project manager want to create a scratch area within the volume group for Oracle backups to disk.

They present a new LUN, disk /dev/rdsk/c16t0d5.

We respond like lightning.

insf -C disk

ioscan -fnC disk

pvcreate /dev/rdsk/c16t0d5

vgextend vg03 /dev/dsk/c16t0d5

The disk is ready for use.
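A quick check confirms the new physical volume is in the group:

vgdisplay -v vg03 | grep "PV Name"

# /dev/dsk/c16t0d5 should now appear alongside the original disks.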

How we set up logical volumes and a file system is covered in a different article.


04 Sep 09 Q4 Crash Dump Analysis: Analyzing System Dump Files

What follows is a document I found on the forums. It can also be found on the docs.hp.com site, but this is a paraphrase, with some extra commentary.

1) You need to have foresight. Before you have a crash, you must enable your system to save crash dumps.

2) vi /etc/rc.config.d/savecrash and set the first parameter to 1, as in the snippet below. Now when your system crashes, and some day it probably will, you can perform q4 analysis and send the results to HP. I think this document originated within HP. I have one written somewhere on the forums, but this one is better.
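For reference, the relevant lines in /etc/rc.config.d/savecrash typically look like the following; variable names can differ slightly between releases, so check the comments in the file itself:

SAVECRASH=1

# 1 enables savecrash at boot; 0 disables it.

SAVECRASH_DIR=/var/adm/crash

# Directory where the dump is copied during reboot.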

USING Q4 TO ANALYZE SYSTEM DUMP FILES

————————————-

When an 11.x HP-UX system crashes, it saves a snapshot of RAM to swap and, during the reboot, copies it into /var/adm/crash. Because these files are binary, a utility called “q4” is used to analyze them and create readable text from which the Response Center can determine the cause of the failure.

============================ STEP 1 ===========================

Dumps are normally saved to /var/adm/crash.

Verify you have a dump to analyze by doing:

# ll /var/adm/crash/cr*

You may see:

/var/adm/crash/crash.0/INDEX

/var/adm/crash/crash.0/vmunix.gz

/var/adm/crash/crash.0/image.0.1.gz

/var/adm/crash/crash.0/image.0.2.gz

/var/adm/crash/crash.0/image.0.3.gz

/var/adm/crash/crash.0/image.0.4.gz

(Your image-file suffixes may vary.)

Both the INDEX file and /etc/shutdownlog contain the “panic” statement.
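A quick way to pull out the panic string without opening the files in an editor (paths assume the crash.0 directory shown above):

grep -i panic /var/adm/crash/crash.0/INDEX

tail -20 /etc/shutdownlog

# Both should show the panic message recorded at the time of the crash.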

============================ STEP 2 ===========================

The following commands must all be run from the dump directory:

  • cd to the dump directory, i.e.: cd /var/adm/crash/crash.0 (substitute your own dump directory)

  • # /usr/contrib/bin/gunzip vmunix.gz

(uncompresses the kernel file – may already be done)

  • # q4prep -p

(ignore the error if this was previously done)

  • Now type:

# q4 -p .

(Note the trailing ‘dot’.)

This will put you at the q4 utility prompt: q4>

  • The next command will get you a “fingerprint” of what was going on on the system at the time of the failure.

  • If you are working with an HP RCE at this time, type the following line and read the results to him:

trace event 0

Otherwise, simply type this next line and continue.

trace event 0 > trace

  • At the prompt type: include analyze.pl (the extension ends with the letter “el”, not the number one)

  • At the next prompt type: run Analyze AU >> ana.out

  • At the next prompt type: exit

============================ STEP 3 ===========================

Generate a patch list:

# swlist -l product PH\* > patch_list

Using the Call ID as the subject, email patch_list, ana.out, and possibly the trace file and what.out to: hpcu@atl.hp.com

NOTE: Maximum email size is 3 MB.
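If the files threaten to exceed that limit, compressing them before attaching usually brings them well under 3 MB. This is just a suggestion, not part of the original procedure:

gzip -9 ana.out trace patch_list

# Produces ana.out.gz, trace.gz, and patch_list.gz for attachment.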

To speed future calls of this nature, open a call with the Response Center and inform them that you will send email with the Call ID as the subject. Then send the ana.out and patch_list files to the email address listed above.


01 Oct 07 Welcome to hpux.ws

This site was created to document procedures, scripts, and processes that I felt were important to myself and to the greater HP-UX community.

This project is owned by ISN Corporation, a Subchapter S Corporation based in Chicago, Illinois, USA.

If you want to contribute to the site, let me know: hpuxadmin on gtalk, hpuxconsulting on Yahoo Messenger.

Regards,

Steven “Shmuel” Protter: Rosh Tzurim, Israel

