Brocade FC switches: automatically switch zoning config at bootup
Somewhat simplified, I have this situation using Brocade switches running Fabric OS 5.3.0d: Server A and storage A are connected to switch A; server B and storage B are connected to switch B. Switch A and switch B are connected via an ISL, forming a two-switch fabric. Storage A is configured to replicate all data to storage B, so storage B holds an exact copy of all the LUNs provisioned by storage A.

If I crash site A and issue a failover command to storage B, server B can successfully access the LUNs on storage B. All is well. The problem is when site A comes back up without any manual intervention, for example after a power failure has been fixed and everything boots automatically. Storage A will still consider itself the "primary" and provision LUNs to server A, while server B is the actual live server, using the replicated LUNs.

From a storage perspective this is manageable: storage A and storage B will not resync until I tell them to, so I just have to make sure storage B replicates changes back to storage A, and not the other way around. My major concern is that I now have two copies of my LUNs, differing by the downtime of site A. The physical servers run VMware, so I'm potentially looking at dozens if not hundreds of network entities coming live in two versions.

I have tried to solve this by creating a recovery zoning config in which storage A is isolated from all servers, so no server can mount LUNs on storage A. When I crash site A, the surviving switch B is issued a "cfgenable recovery_config" command. I had hoped that when site A and switch A came back up, switch A would think "I see a different config is enabled than the one I had when I was last alive; I'll switch to that, since the other switch is principal", assuming switch B becomes principal as the only remaining switch in the fabric. In practice, switch A will not merge with switch B when it boots up again: the fabric is segmented and the E-port reports a zoning conflict.

I can manually enable the recovery config on switch A and bounce the E-port to rebuild the fabric, but ideally I'd like switch A to automatically merge with switch B and adopt the recovery config as soon as it boots. I have been looking at forcing switch B to be principal with "fabricPrincipal", but a full-scale crash, failover and failback test is quite a lengthy process, so I'd like to hear what you people think first. Site A is the primary production site and site B is a recovery site; I will not try to make this an active-active configuration.
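For concreteness, here is a sketch of the sequence I'm describing in Fabric OS CLI terms. The config name "recovery_config" and the ISL port number 0 are placeholders from my setup, and the exact command syntax (especially for fabricprincipal) may differ on 5.3.0d, so check against the Fabric OS Command Reference before relying on it:

```shell
# At failover time, on the surviving switch B:
# enable the pre-built zoning config that isolates storage A
# from all servers.
cfgenable "recovery_config"

# What I have to do manually today when switch A comes back up
# segmented: on switch A, enable the same config, then bounce
# the E-port (port 0 is a placeholder for the ISL port) so the
# fabric rebuilds.
cfgenable "recovery_config"
portdisable 0
portenable 0

# The alternative I'm considering: make switch B win the principal
# election, hoping switch A then accepts B's enabled config on merge.
# (fabricprincipal syntax varies between FOS releases.)
fabricprincipal 1
```

To be clear, the question is whether the merge can be made automatic so the middle block above becomes unnecessary.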