Tips and Tricks site for advanced HP-UX Engineers

23 Jul 15 HP-UX patch depot tutorial

Patch Depot tutorial

So you downloaded your QPK after running an swa inventory analysis of which patches your HP-UX server fleet needs. You have a bundle, but you do not want the default bundle name. No problem: use the create_depot script to customize it.

The following command sets the bundle name (-b) and the title of the bundle (-t):
./create_depot_hpux.11.31 -b 201510HPUXPATCHMYCOMPANY -t MYCOMPANYFALL2015
… some output

A directory named depot is created:
cd depot

Check the bundle list. The names should be meaningful
swlist -l bundle -s $PWD
# Initializing...
# Contacting target "myhost"...
#
# Target: myhost:/Depots/tmp/depot
#

201510HPUXPATCHMYCOMPANY B.2015.07.23 MYCOMPANYFALL2015
QPKAPPS B.11.31.1503.411a Applications Patches for HP-UX 11i v3, March 2015
QPKBASE B.11.31.1503.411a Base Quality Pack Bundle for HP-UX 11i v3, March 2015
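Optionally, verify the depot before copying from it (a quick integrity check, assuming you are still in the depot directory):

swverify -d \* @ $PWD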

Next, copy them to a single install point (enforce_dependencies=FALSE tolerates dependencies not present in the source, reinstall=TRUE overwrites filesets already in the target, and write_remote_files=TRUE allows the target to live on NFS):
swcopy -x enforce_dependencies=FALSE -x reinstall=TRUE -x write_remote_files=TRUE -d -s $PWD \* @ /Depots/B.11.31/2015FY_second

… some output deleted …

PHSS_43882.HVSD-KRN,r=1.0,a=HP-UX_B.11.31_IA,v=HP,fr=1.0,fa=HP-UX_B.11.31_IA
PHSS_43883.HSSN-KRN,r=1.0,a=HP-UX_B.11.31_IA,v=HP,fr=1.0,fa=HP-UX_B.11.31_IA
PHSS_43884.IGSSN-KRN,r=1.0,a=HP-UX_B.11.31_IA,v=HP,fr=1.0,fa=HP-UX_B.11.31_IA
PHSS_43886.GVSD-KRN,r=1.0,a=HP-UX_B.11.31_IA,v=HP,fr=1.0,fa=HP-UX_B.11.31_IA
* Selection succeeded.

* Beginning Analysis and Execution
* Session selections have been saved in the file
"/root/.sw/sessions/swcopy.last".
WARNING: "myhost:/Depots/B.11.31/2015FY_second": The software
dependencies for 4 products or filesets cannot be resolved.
* The execution phase succeeded for
"myhost:/Depots/B.11.31/2015FY_second".
* Analysis and Execution succeeded.

NOTE: More information may be found in the agent logfile using the
command "swjob -a log myhost-2347 @
myhost:/Depots/B.11.31/2015FY_second".

======= 07/23/15 14:49:03 PDT END swcopy SESSION (non-interactive)
(jobid=myhost-2347)

Next, check the destination depot:
swlist -l depot -s /Depots/B.11.31/2015FY_second

# Initializing...
# Contacting target "myhost"...
#
# Target: myhost:/Depots/B.11.31/2015FY_second
#

201510HPUXPATCHMYCOMPANY B.2015.07.23 MYCOMPANYFALL2015
QPKAPPS B.11.31.1503.411a Applications Patches for HP-UX 11i v3, March 2015
QPKBASE B.11.31.1503.411a Base Quality Pack Bundle for HP-UX 11i v3, March 2015

Then a problem comes up and you need to add a new patch to your already-built depot, say PHSS_44116.depot, a fix for issues starting HPVM guests.
swcopy -d -s $PWD/PHSS_44116.depot \* @ /Depots/B.11.31/2015FY_second

======= 07/23/15 14:58:12 PDT BEGIN swcopy SESSION (non-interactive)
(jobid=myhost-2348)

* Session started for user "root@myhost".

* Beginning Selection
* Target connection succeeded for
"myhost:/Depots/B.11.31/2015FY_second".
* Source: /Depots/B.11.31/PHSS_44116.depot
* Targets: myhost:/Depots/B.11.31/2015FY_second
* Software selections:
PHSS_44116.HPVM-CORE,r=1.0,a=HP-UX_B.11.31_IA,v=HP,fr=1.0,fa=HP-UX_B.11.31_IA
PHSS_44116.VIRT-PROVIDER,r=1.0,a=HP-UX_B.11.31_IA,v=HP,fr=1.0,fa=HP-UX_B.11.31_IA
* Selection succeeded.

* Beginning Analysis and Execution
* Session selections have been saved in the file
"/root/.sw/sessions/swcopy.last".
* The analysis phase succeeded for
"myhost:/Depots/B.11.31/2015FY_second".
* The execution phase succeeded for
"myhost:/Depots/B.11.31/2015FY_second".
* Analysis and Execution succeeded.

NOTE: More information may be found in the agent logfile using the
command "swjob -a log myhost-2348 @
myhost:/Depots/B.11.31/2015FY_second".

======= 07/23/15 14:58:14 PDT END swcopy SESSION (non-interactive)
(jobid=myhost-2348)

This method can be used to deliver any software built in depot format.
Let's check that the patch became part of the depot:
myhost:root > swlist -s /Depots/B.11.31/2015FY_second
# Initializing...
# Contacting target "myhost"...
#
# Target: myhost:/Depots/B.11.31/2015FY_second
#

#
# Bundle(s):
#

201510HPUXPATCHMYCOMPANY B.2015.07.23 MYCOMPANYFALL2015
QPKAPPS B.11.31.1503.411a Applications Patches for HP-UX 11i v3, March 2015
QPKBASE B.11.31.1503.411a Base Quality Pack Bundle for HP-UX 11i v3, March 2015
#
# Product(s) not contained in a Bundle:
#

PHSS_44116 1.0 HPVM B.06.30 CORE PATCH
myhost:root >
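From here, target servers can install straight from the depot. A minimal sketch, with the bundle and depot names from the example above (add -x autoreboot=true only if you are ready for the reboot that kernel patches can trigger):

swinstall -x autoreboot=true -s myhost:/Depots/B.11.31/2015FY_second 201510HPUXPATCHMYCOMPANY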


03 Mar 15 scopeux and midaemon don’t want to run

midaemon and scopeux combine to collect performance data on HP-UX.

They both need to be running to properly collect data.

They are delivered in a depot called MeasureWare, which is part of the base OS.

To see if it is installed:
myserv0:root > swlist -l bundle TC097EA
# Initializing...
# Contacting target "myserv0"...
#
# Target: myserv0:/
#

TC097EA 11.20.000 HP Operations Agent

If it is not installed, the HP Operations Agent can be downloaded from HP, provided you have a software support contract with HP.

It is also delivered as part of OpenView, which is a separately licensed product.

I recently implemented performance data collection on a fleet of 100+ servers where I work.

On three of the servers, the daemons refused to run normally.

The following error was recorded in the file /var/opt/perf/status.mi:
Unable to find newly enabled CPU.
Please use -prealloc to allocate bufsets for all CPUs.

Here are the steps to implement the fix:
mwa stop all
/opt/perf/bin/ovpa stop
/opt/perf/bin/pctl stop
perfstat

Gently kill (SIGTERM, not SIGKILL) any perf processes that the perfstat output still shows as running.

Edit the file /etc/rc.config.d/ovpa:
MIPARMS="-prealloc=2 -pids 10000 -kths 10000 -smdvss 512M"
export MIPARMS

Here 2 is the number of physical CPUs in the box; set -prealloc to match your CPU count.
If present, the file /var/opt/perf/datafiles/RUN should be deleted.


mwa start all
perfstat

Check back after an hour, and again after a day, to confirm that midaemon and scopeux are still running.
Check /var/opt/perf/datafiles for updated log files.
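A quick way to confirm both collectors stayed up and are logging (a minimal check; the paths are the standard perf locations):

perfstat
ps -ef | grep -e midaemon -e scopeux | grep -v grep
ll /var/opt/perf/datafiles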


06 Aug 14 Custom naming your bi-annual HP-UX patch sets

Having a meaningful name associated with your bi-annual patch bundle makes it easier to inventory than the default name BUNDLE.

This is based on a QPK download, which requires a support agreement. The output shown is from 11.23; it worked with 11.31 as well.

./create_depot_hpux.11.23 -b "201407HPUXPATCHMINE" -t 201407HPUXPATCHMINE

< .. lots of boring output >

# DEST is the destination depot for the patch set, for example:
DEST=/Depots/B.11.23/2014midyear_depot

cd depot

swcopy -x enforce_dependencies=false -s $PWD \* @ $DEST

< .. lots of boring output >

mygush0:root > swlist -l bundle -s $DEST
# Initializing...
# Contacting target "mygush0"...
#
# Target: mygush0:/Depots/B.11.23/2014midyear_depot
#

201407HPUXPATCHMINE         B.2014.08.06   201407HPUXPATCHMINE
DNSUPGRADE                    C.9.3.2.13.0   BIND UPGRADE
FEATURE11i                    B.11.23.1009.083 Feature Enablement Patches for HP-UX 11i v2, September 2010
HPSIM-HP-UX                   C.07.03.00.00.03 HP Systems Insight Manager Server Bundle
HWEnable11i                   B.11.23.1012.085a Hardware Enablement Patches for HP-UX 11i v2, October 2010
JAVAOOB                       2.05.00        Java2 Out-of-box for HP-UX
NodeHostNameXpnd              B.11.23.01     Nodename, Hostname expansion enhancement
OpenSSL                       A.00.09.08za.002 Secure Network Communications Protocol
QPKAPPS                       B.11.23.1012.086a Applications Patches for HP-UX 11i v2, December 2010
QPKBASE                       B.11.23.1012.086a Base Quality Pack Bundle for HP-UX 11i v2, December 2010

More fun


21 Jun 12 When swinstall will not install: What to check

I’ve just been through another frustrating battle with swinstall and wanted a complete checklist of what to check when it won’t install software (a consolidated check script follows the list):

  1. Check that the system IP address (ifconfig lan#) matches what is defined in /etc/hosts. If these are inconsistent, swinstall will not work, and the error message is far from meaningful.
  2. Check that /etc/nsswitch.conf exists. After a clean install it does not exist and needs to be put in place.
  3. Check that NFS is working correctly for an NFS-based install. Bounce nfs.client, nfs.server, nfs.core in that order to stop, and in reverse order to start, e.g. /sbin/init.d/nfs.core <start|stop>.
  4. Use showmount -e <remotehost> to ensure connectivity to remote depots.
  5. swlist -l depot -s <remote host depot>
  6. swreg -l depot $PWD on the remote host after cd-ing into the depot. Remember that in many scenarios remote depots in tape format will not install.
  7. /usr/sbin/swagentd -r to restart the daemon (do this after any of the above corrective steps).
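A rough script that runs the read-only checks from the list in one pass (a sketch; DEPOT_HOST, DEPOT_PATH, and lan0 are placeholders to adjust for your site):

#!/usr/bin/sh
# Sketch: run the non-destructive swinstall sanity checks in one pass.
DEPOT_HOST=mydepotserver             # placeholder: your depot server
DEPOT_PATH=/Depots/B.11.31/current   # placeholder: your depot path

grep "$(hostname)" /etc/hosts        # item 1: does /etc/hosts know this host?
ifconfig lan0                        # item 1: compare this address to /etc/hosts
ls -l /etc/nsswitch.conf             # item 2: must exist
showmount -e $DEPOT_HOST             # item 4: NFS connectivity to the depot server
swlist -l depot -s $DEPOT_HOST:$DEPOT_PATH   # item 5: can SD-UX see the depot?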


04 Jun 12 Cleaning up an HP-UX depot

This post was a long time coming. The material comes from a PowerPoint presentation I gave years ago; nine years ago, to be exact.

My depot is too big and contains patches that have been superseded several times. What to do?

cleanup -p -d <depot.name>   # preview
cleanup -d <depot.name>
When you run it:
Removing superseded 11.X patches from depot: /depot/PATCH …done.
The superseded 11.X patches have been removed from the depot:
/depot/PATCH.
All information has been logged to /var/adm/cleanup.log.
### Cleanup program completed at 06/04/12  11:32:38
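To see how much the depot shrank, count the patch products before and after the cleanup (a quick sketch; substitute your depot path):

swlist -l product -s /depot/PATCH | sed '/^#/d' | wc -l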

12 Jul 11 Migrate a VxVM-booted system to LVM

From the HP-UX Veritas Administration guide, buried on page 106

This example shows how to create an LVM root disk on physical disk c0t1d0
after removing the existing LVM root disk configuration from that disk.

BOOTDG=$(vxdg bootdg)

vxprint -htg $BOOTDG | grep ^dm

dm rootdisk01   disk233_p2   auto     1024     142450592 -
dm rootmirr     disk234_p2   auto     1024     142450592 -

# You get the boot disk from this command. Strip the trailing partition (_p2, or s2 on legacy devices); either legacy devices or the agile (persistent) DSF devices will work.

# You may need to use vxbrk_mirror to break the mirror. Make sure you know which disk you are booted from; check syslog to be sure, as setboot is not a good indicator.
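# A quick way to confirm which disk you actually booted from (a sketch;
# the exact syslog message text varies by release):
grep -i "boot device" /var/adm/syslog/syslog.log
setboot    # shows the configured boot paths, not necessarily the current one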

/etc/vx/bin/vxdestroy_lvmroot -v c0t1d0
/etc/vx/bin/vxres_lvmroot -v -b c0t1d0
The -b option to vxres_lvmroot sets c0t1d0 as the primary boot device.
As these operations can take some time, the verbose option, -v, is specified to
indicate how far the operation has progressed.

This command takes care of setboot and all details. Then just boot from the console.

This procedure does not remove the VxVM software; the daemon still runs. But your system now boots LVM, and that makes using Dynamic Root Disk (DRD) much easier.

11 Apr 11 swlist: check the state of patches

swlist -l fileset -a state | grep -v config | sed '/^#/d'

Output looks like this:
PHCO_36551.CORE2-64SLIB               transient
PHCO_36551.CORE2-SHLIBS               transient

Look for filesets in state installed or transient instead of configured.

swconfig \* or swconfig PHCO_36551 may fix the issue.
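After running swconfig, re-check the state of the example patch:

swlist -l fileset -a state PHCO_36551 | sed '/^#/d'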


18 May 10 swlist command to provide install date

A new trick learned from an HP support backline engineer.

swlist -l fileset -a revision -a title -a state -a install_date

--------- Sample output ---------
# vmGuestLib B.04.00 Integrity VM vmGuestLib 200903081306.51
vmGuestLib.GUEST-LIB B.04.00 Integrity VM GUEST-LIB fileset 200903081306.51 configured
# vmProvider B.04.00 WBEM Provider for Integrity VM vmProvider 200903081306.59
vmProvider.VM-PROV-CORE B.04.00 WBEM Provider for Integrity VM VM-PROV-CORE 200903081306.59 configured
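Since install_date is formatted YYYYMMDDhhmm.ss, a plain text sort surfaces the most recently installed products (a one-liner sketch):

swlist -l product -a install_date | sed '/^#/d' | sort -k2 | tail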


30 Sep 09 SD-UX Locked. Diagnostic steps.

Problem: after being Ignited, superman lost most SD-UX functionality.

Note: superman (not its real name) is a vPar running on a Superdome complex. Only swlist works; swreg -l depot, swinstall -i, and swverify all fail with the same error:

ERROR:   "superman/":  You do not have the required permissions to
         select this target.  Check permissions using the "swacl"
         command or see your system administrator for assistance.  Or,
         to manage applications designed and packaged for nonprivileged
         mode, see the "run_as_superuser" option in the "sd" man page.
       * Target connection failed for "superman:/".
ERROR:   More information may be found in the daemon logfile on this
         target (default location is
         superman:/var/adm/sw/swagentd.log).
       * Selection had errors.

Standard techniques say to check:

/sbin/init.d/swagentd stop

/sbin/init.d/swagentd start

Check that /etc/hosts and networking are consistent.

Make sure /etc/nsswitch.conf is present and makes sense.

Check permissions on /var/tmp and all the swagent files.
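For that last check, the expected permissions look like this (reference values, not output from the original session):

ll -d /var/tmp        # expect drwxrwxrwt (world-writable with the sticky bit)
ll /var/adm/sw/swagent.log /var/adm/sw/swagentd.log   # should be owned by root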

None of this worked.

Running swlist -i -s $PWD in a depot generated the following error. (The output below is taken from ITRC, because the system has already been fixed.) Listing the host ACLs:

swacl -l host @ superman

generates this:

Util_Random internal error:  Read of /dev/urandom failed, rv=-1, size=8, No such device (19).

There were a series of other errors, all pointing to /dev/urandom.

lsdev showed that /dev/urandom had not loaded the kernel module rng (Random Number Generator). The SAM log recorded the commands it ran:
Detail    root      /usr/sam/tui/kc/modulemod.sh rng
Detail    root      /usr/sbin/kcmodule -a -P ALL

The lsdev output below is normal; before the system was fixed, the module did not show as running.

lsdev | grep rng

138          -1         rng             pseudo

The fix was to unload the rng module from the kernel (using SAM, with SEP cheats), then load it again. In spite of the module being listed as dynamic, a reboot was required to restore SD-UX functionality.
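For reference, the same unload and reload can be done with kcmodule directly (a sketch; SAM drives the same tooling underneath):

kcmodule rng            # show the module's current state
kcmodule rng=unused     # unload it
kcmodule rng=loaded     # load it again; kcmodule reports whether a reboot is needed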

Actual source of the problem: the Ignite image of supergirl did not exclude the /dev/ "files". This caused the wrong kernel module to be associated with the /dev/urandom "file". Normally this is not a problem because /dev is re-created, but for some reason /dev/urandom was not loading the kernel module rng.

Ignite excludes have been updated to exclude these files and the system will be re-ignited to make sure nothing else bad happens.


09 Sep 09 Case Study: Capacity & Migration planning for a small organization

This is our first case study. The events leading up to it occurred between 1998 and 2002. It is a real-life case study based on my experience. For legal reasons, I cannot identify the organization. It is a charity that now raises around $100 million a year; 92% of funds raised go to actual charitable work, and 8% is overhead. IT infrastructure is overhead, even though it is critical to actually raising funds.

From 1991 to 2005 I worked at this charity in IT, first as a programmer/analyst, then as a DBA, finally becoming the backup Unix admin in 1998 and the full-time Unix admin in 2000. The organization ran its legacy fundraising systems on a pair of D-class HP-UX systems. The back-end database was Software AG Adabas. The fundraising user community wanted an SQL-like ability to look into the database and run queries; they wanted flexible use of strategic data. An attempt was made in early 1997 to install an SQL front end, but it did not provide acceptable results.

An internal study was done, and in late 1997 it was decided to migrate the legacy systems to a web-based front end, with Oracle as the back-end database and Oracle Application Server using Forms and Reports to build applications. Initially no plan was made to migrate to stronger hardware, due to Oracle's assurance that their software would run on the existing infrastructure.

By 2000 it was obvious that this was not true. Though the database server itself ran acceptably, there was not sufficient memory or disk capacity to run the application server. So I was asked to prepare a plan to migrate the legacy systems. Here were the guidelines:

  • Run three environments, described below, each with a database server, an application server, and the Forms and Reports development tools.
  • The sandbox was to be used to test OS patches, Oracle patches, and tools upgrades. It belonged to the system administrator, who was permitted to restart it on short or no notice.
  • The development environment was where the developers were to develop code. It needed to be stable and 100% available during normal development hours, 8 a.m. to 6 p.m. Any changes made to this system were first to be vetted on the sandbox system.
  • The production system had the same uptime requirements, except that all changes needed to be vetted first on the other two systems.
  • The hardware was to be the same model for all the systems, to avoid hardware surprises. Only the production system needed to be at full capacity; the other systems were to be the same model to permit realistic load testing.
  • Databases would be hosted on SAN disk with HBA Fibre Channel connections. Systems were to boot locally.

Overall, I thought this was a solid foundation. Some of the points were made by management, some were suggested by me.

The following basic technical requirements were developed:

  • The overall database needed to be approximately 5 GB per server. Actual use hit 15 GB by 2005; this growth was planned for.
  • One Oracle Server instance had to run on each server.
  • One Oracle Application Server instance had to run on each server.
  • The legacy applications (Natural/Software AG Adabas) needed to run on each server.
  • Server configuration needed to be managed and tracked responsibly.
  • HP-UX bi-annual updates needed to be installed on a timely basis after quality assurance.
  • The replacement cycle on hardware would be 3-5 years, to maintain the cost savings provided by being under warranty (the first three years).

Deployment Diagram

[Diagram: Server Deployment]

Other relevant facts on the decision-making process:

  • HP Hardware and Software agreements were running over $30,000 per year on existing infrastructure.
  • Much of the cost was hardware support due to the age and near obsolescence of the hardware.
  • Significant savings could be obtained by using current hardware that was under warranty.
  • Systems would be configured and used to provide a disaster recovery solution.

Three vendors were picked to provide proposals. All ended up recommending HP 9000 L2000 (later renamed rp5450) servers. Here are the highlights:

  • rp5450 systems with 2 GB system memory.
  • Dual 146 GB disks to serve as boot disks, with software mirroring.
  • Two CPUs installed per server.
  • Memory capacity and purchase were planned to enable an upgrade to 8 GB without replacing existing memory.
  • Two HBA Fibre Channel cards per machine, for redundancy and failover.
  • A capital budget request was made showing that support cost savings would, over the course of 4 years, completely recover the cost of the systems.
  • Each system would have an Ultrium tape drive for locally provided backups and Ignite-UX make_tape_recovery backups as part of the DR plan.
  • Systems had two Gigabit network interface cards.
  • Systems would have a private network for use in Ignite backup, recovery, and system replication.
  • Systems were to be delivered with HP-UX 11.11.
  • An HP-provided rack, UPS, and PDU were specified.

How it went:

  • Systems were delivered in May of 2002.
  • Initial OS installation began immediately. Systems were initially delivered with HP-UX 11.00, so we delayed the start of installation until the correct media was provided.
  • All three systems were installed with a base OS to ensure that the hardware was working.
  • OS patch requirements for Oracle, security, and bi-annual updates were installed on the sandbox. It was decided that an Ignite golden image would be used to replicate the sandbox configuration once a stable configuration was found.
  • Significant problems were encountered with the Oracle and Oracle Application Server installations. The version was changed twice. Several major Oracle patch sets had to be installed to deal with "show stopper" bugs that were encountered.
  • After the September 11 attacks in New York City in 2001, a security review was conducted, and the deployment plan was modified to include improved security. Several rounds of patching and tools testing occurred at the OS level.
  • In December of 2002, the application development team notified us that they were satisfied with the sandbox and asked that an Ignite image be made and transferred to the development system.
  • In January and February of 2003, imaging was done and the system was replicated. There were OS problems with the Ignite replication that took several weeks to work out.
  • Several changes were requested by the development staff. They were tested on the sandbox and then deployed on the development system.
  • An Ignite central server was built on the sandbox to handle images, which were shared over NFS and available for use after booting the sandbox Ignite configuration.
  • In June of 2003, after several change cycles, the configuration was approved for deployment.
  • Ignite replication was completed on the production environment, using the sandbox (which had been frozen for this purpose) as the image template.
  • In August of 2003 all legacy systems were cut over to the rp5450 systems. HR would be migrated 18 months later due to integration issues.
  • In early 2004, due to performance and memory use issues, all systems were upgraded to 8 GB of system RAM.
  • For the year 2004 there was no downtime on production systems during normal business hours.
  • Weekly Ignite tape backups were taken on all systems, and network-based backup to shared NFS was used as a secondary DR method.
  • In February of 2004 a DR test was run at the HP Performance Center, and we successfully migrated a sandbox image to an rp5470 server in HP's infrastructure. Legacy systems were tested and approved as functional.

Note: This document was designed entirely using the WordPress interface and a Linux system. The diagram was created with Dia, a free Linux alternative to Visio. The tool is under evaluation and might be replaced, but it is still a pretty good start. Cost to produce this environment in licensing fees? Zero dollars.

