Now that the port of the application has gone live, and the migration of the user community is basically in the hands of the Applications Team and management, my focus has changed to updating the disaster recovery plan for the site.
Firstly, and most obviously, with the implementation of the Itaniums, we needed new computers available off site. The company had realized that this would be necessary when they originally purchased the Itaniums over a year ago, and also purchased two additional BL860c blades to be installed at the disaster recovery site. The amusing thing was that they were still sitting at the primary site up until recently, where they wouldn't have been much use in a disaster.
So a month or so ago, I journeyed to the disaster recovery site with the blades, and installed them in the enclosure out there. And with a bit of finagling, we arranged for them to be cabled up and made available on the wide area network.
This meant I could configure them from the primary site, and also, once configured, I could set up a recurring job to copy configuration information to them to be used to recover in the event of a real disaster.
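For the curious, the copy step of such a recurring job can be sketched as follows. This is just an illustrative cross-platform sketch in Python, not the actual job (which would naturally be an OpenVMS DCL batch procedure); the file names and directories are hypothetical.

```python
#!/usr/bin/env python3
# Sketch of the "copy configuration information to the satellite" step.
# File names and paths are hypothetical examples, not the real ones.
import shutil
from pathlib import Path

# Hypothetical list of configuration files worth staging at the DR site.
CONFIG_FILES = ["modparams.dat", "sysuaf_listing.txt", "license_listing.txt"]

def copy_config(src_dir: Path, dest_dir: Path) -> list:
    """Copy each configuration file that exists in src_dir to dest_dir,
    creating dest_dir if needed; return the destination paths written."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for name in CONFIG_FILES:
        src = src_dir / name
        if src.is_file():
            # copy2 preserves timestamps, handy when checking staleness
            written.append(Path(shutil.copy2(src, dest_dir / name)))
    return written
```

A scheduler (on OpenVMS, a batch job that resubmits itself) would invoke this step on a regular cadence so the DR machines always hold reasonably fresh configuration data.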
Configuring them was interesting, as this was the first time I had used Virtual Connect Manager (warning: 2.84 MB) to set up the LAN and Fibre Channel links in a blade enclosure. The production ones had been configured by HP before I arrived on the scene.
I powered up the blades and installed OpenVMS on one using the virtual DVD feature of the iLO management software, intending to run this one as a boot server, and run the other as a satellite. Once I had the machine up and running, I was struggling to understand why I couldn't access the network via any of the four visible NICs. Then I realized that while I had created the virtual connect profile for the machine, I hadn't assigned it.
On trying to assign it, I got an error telling me that the machine had to be powered off before this action was allowed. Damn. Shut down OpenVMS, power off blade, assign virtual connection profile to blade, power up blade, reboot OpenVMS.
Anyway, once past this, everything else was straightforward. The machines are configured in a two node cluster, running on their internal disk drives. As they have no shared storage until we perform a disaster recovery test or (Deity forbid) a real recovery, one is configured as a satellite node. This also allows me to use its (the satellite's) internal drives as a dumping area for configuration information used in the recovery plan.
Posted at February 19, 2009 9:33 PM