>Recently, we had the CE do the concurrent LIC upgrade to 18.104.22.168.
>Please keep in mind that we have dpo 1.2 on our servers (yes I know
>that's ancient, we have plans to migrate to the latest sdd). Our
>nodes have multiple fibers/switches/etc.
We've got a simple case -- multiple paths from servers to the Shark
via FC-AL but no switches. All hosts are AIX 4.3.3 at the same revs, and
no SCSI for attachments anywhere. (We originally used SCSI but locking
was so much faster with FC-AL that it helped cut our HACMP failover
times...not to mention easier cabling and whatnot.)
>A few of our vpaths 'disappeared', much to my consternation.
>Fortunately (for me), they all reappeared after a reboot. But when
>you have 40 nodes, reboots bite.
>Now, my question is: has anyone done the conversion from dpo to sdd?
>How was that? All your hdisks mapped successfully to vpaths?
Just did that... process we used was:
1. Draft step by step plan and test on test boxes, finalize
2. Schedule downtime and arrange for CE to come on-site
3. On the day of maint, the CE started work -- 7 hours total for the LIC upgrade,
and the CE would not need to start disabling paths until about 3 hours in.
4. We upgraded OS on servers, upgraded apps, removed all 2105 hdisks/vpaths
and the dpo pseudodevice. (removal required to uninstall dpo!)
5. Uninstalled old v1.2 DPO driver
6. Installed new v1.3 SDD driver
7. Rebooted each server, one by one.
8. After they came up, hopped on, did 'lspv', 'lsvpcfg', 'datapath query
device' to verify state and correct mapping.
9. Had to do 'hd2vp <ESS VG>' for each ESS VG hooked up to the system. No big deal.
10. When the CE was ready to take down a host bay, I disabled one path on each
server (as appropriate) corresponding to that host bay, then told the CE to
proceed. After the CE finished, I re-enabled, verified state, took down the
next set of paths for the next host bay, told the CE to proceed, etc.
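For the curious, the per-server command sequence for steps 8-10 looked roughly like
the sketch below. It's written dry-run style (commands are echoed, not executed);
the adapter number and the VG name "datavg" are made-up examples -- map them from
your own 'datapath query device' output on each real server before doing anything.

```shell
#!/bin/sh
# Dry-run sketch of steps 8-10 above. Set DRYRUN="" to run for real on an
# AIX host with SDD installed; device/adapter names below are examples only.
DRYRUN=echo

# Step 8: after the reboot, verify state and hdisk-to-vpath mapping.
$DRYRUN lspv
$DRYRUN lsvpcfg
$DRYRUN datapath query device

# Step 9: convert each ESS volume group back to vpath devices
# ("datavg" is a hypothetical VG name).
$DRYRUN hd2vp datavg

# Step 10: before the CE takes down a host bay, take the paths through
# that bay's adapter offline; re-enable and re-verify afterward.
$DRYRUN datapath set adapter 0 offline
# ... CE services the host bay ...
$DRYRUN datapath set adapter 0 online
$DRYRUN datapath query device
```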
That was all that we had to do.
That works OK if you have fewer than about 4-6 servers... above that,
I would have split it into two separate maints -- one to upgrade the systems
first, and one to upgrade the ESS once all the systems were upgraded.
I combined the two in a single maint because these were the few hosts
that I had great difficulty in scheduling a downtime for (once a year
is about all I get to do it!). I had already done the other hosts prior.
Did everything in a specific order to satisfy each step's prereqs,
and tested it all beforehand. So the actual work went well.
The actual LIC upgrade process was transparent to us. The big trick is to
make sure each system is at the correct revs (OS, 2105 drivers, SDD, etc.)
beforehand, so that it matches up with the new LIC code and gives you the
fewest problems and surprises.
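A quick pre-flight on every host is a cheap way to confirm those revs before the
maint. This is a hedged sketch in the same dry-run style (echoed, not executed);
the fileset name patterns are assumptions that vary by release, so check them
against the readme for your SDD and LIC levels.

```shell
#!/bin/sh
# Dry-run pre-flight check; set DRYRUN="" to run on a real AIX host.
DRYRUN=echo

$DRYRUN oslevel                                # AIX maintenance level
$DRYRUN lslpp -l "devices.fcp.disk.ibm2105*"   # 2105 host attachment fileset (assumed name)
$DRYRUN lslpp -l "*Sdd*"                       # SDD driver fileset/version (assumed pattern)
```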
The SDD docs hint that future upgrades to the SDD driver will be doable
without a reboot, which is very hopeful. It's only when going from 1.2 to 1.3
that a reboot is mandatory, for various reasons.
I didn't have any problems with a transitional setup of 1.2 drivers on some
servers and 1.3 drivers on other servers; I just made sure to have them all
at 1.3 _before_ doing the ESS LIC upgrade.
That allowed me to schedule maints to upgrade individual servers in groups,
when I could, rather than needing to do all in one big swoop.