A computer components & hardware forum. HardwareBanter

Multipathing & ZFS



 
 
#1 - ChrisS - December 17th 06, 09:40 PM (posted to comp.arch.storage)

I hope someone can answer this for me!

Does stmsboot or mpathadm (Solaris 10 U3) play nice with ZFS for good
failover HA?

I'm currently using Solaris 10 6/06 (U2) with some zpools and stmsboot -e,
but my devices don't seem to change when I look at the format
output.

On one of my E15K test domains, I'm seeing my two LUNs via my two Sun
Qlogic FC cards (SG-XPCI2FC-QF2), which go to my two SAN switches (one FC
connection per card), with two SAN zones for the two ports on my Hitachi
AMS1000. Each SAN zone contains only one of my two HBAs, and both
Hitachi array ports are in each of those zones, so that either HBA can
fail over to either array port.

I've enabled stmsboot -e and rebooted, but I still see all the
individual devices on my two controllers and paths that I saw prior
to the reboot. Is this normal?

So I still see this:

# echo | format
0. c0t0d0 SUN36G cyl 24620 alt 2 hd 27 sec 107
/pci@dc,700000/pci@1/scsi@2/sd@0,0
1. c0t1d0 SUN36G cyl 24620 alt 2 hd 27 sec 107
/pci@dc,700000/pci@1/scsi@2/sd@1,0
2. c1t50060E80100293E2d0 HITACHI-DF600F-0000-500.00GB
/pci@fd,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80100293e2,0
3. c1t50060E80100293E2d1 HITACHI-DF600F-0000-250.00GB
/pci@fd,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80100293e2,1
4. c1t50060E80100293E6d0 HITACHI-DF600F-0000-500.00GB
/pci@fd,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80100293e6,0
5. c1t50060E80100293E6d1 HITACHI-DF600F-0000-250.00GB
/pci@fd,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80100293e6,1
6. c3t50060E80100293E2d0 HITACHI-DF600F-0000-500.00GB
/pci@dd,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80100293e2,0
7. c3t50060E80100293E2d1 HITACHI-DF600F-0000-250.00GB
/pci@dd,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80100293e2,1
8. c3t50060E80100293E6d0 HITACHI-DF600F-0000-500.00GB
/pci@dd,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80100293e6,0
9. c3t50060E80100293E6d1 HITACHI-DF600F-0000-250.00GB
/pci@dd,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80100293e6,1

Being a n00b and being in a test environment, I pulled one of the
fibres on my E15K test domain while it was writing some test data to
one of the ZFS file systems. BOOM! No failover, just a kernel panic
from ZFS, then a crash and reboot.

Any ideas? I'm trying to stay away from Hitachi's HDLM failover
software, due to the added complexity and not knowing whether it plays
nice with ZFS either; the manual doesn't really say.

#2 - Bill Todd - December 18th 06, 03:44 AM (posted to comp.arch.storage)

ChrisS wrote:
> I hope someone can answer this for me!
>
> Does stmsboot or mpathadm (Solaris 10 U3) play nice with ZFS for good
> failover HA?
>
> I'm currently using Sol. 10 6/06 (u2) with some zpools and stmsboot -e,
> but my devices don't seem to change when I'm looking at the format
> output


The short answer is that 6/06 Solaris 10 ZFS doesn't support failover.
The ZFS FAQ (available, along with other useful information - especially
the forums - at http://www.opensolaris.org/os/community/zfs/ ) states
that SunCluster V3.2, which supports HA ZFS failover, should be
available about now.

IIRC, use of failover with the 6/06 Solaris release has been discussed
in the forum, but it takes some tweaking.

- bill
#3 - Bill Todd - December 18th 06, 04:10 AM (posted to comp.arch.storage)

Bill Todd wrote:
> ChrisS wrote:
>> I hope someone can answer this for me!
>>
>> Does stmsboot or mpathadm (Solaris 10 U3) play nice with ZFS for good
>> failover HA?
>>
>> I'm currently using Sol. 10 6/06 (u2) with some zpools and stmsboot -e,
>> but my devices don't seem to change when I'm looking at the format
>> output
>
> The short answer is that 6/06 Solaris 10 ZFS doesn't support failover.


Whoops - I should have read more carefully (especially your post
title...), since you appear to be referring to failing over a connection
rather than node-to-partner failover. I suspect that ZFS currently does
nothing to facilitate this, nor to hinder it if lower-level facilities
perform it fully transparently (which they don't seem to do in your
case). The ZFS forum that I mentioned would be a good place to discuss
this.

- bill
#4 - John S. - December 18th 06, 11:52 AM (posted to comp.arch.storage)



On Dec 17, 9:40 pm, "ChrisS" wrote:
> I hope someone can answer this for me!
>
> Does stmsboot or mpathadm (Solaris 10 U3) play nice with ZFS for good
> failover HA?
>
> [rest of the original question and format output snipped]
>
> Any ideas? I'm trying to stay away from Hitachi's HDLM fail-over
> software, due even more complexity and not knowing if it too plays nice
> with ZFS, manual doesn't really indicate anything like that.


I'm guessing you need to add an entry to the
/kernel/drv/scsi_vhci.conf file. Look towards the bottom of the file
and you'll see the comments about adding third-party arrays.
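For the Hitachi AMS1000 in the format output above (vendor "HITACHI", product "DF600F"), such an entry would look roughly like the sketch below. This is an assumption based on the generic third-party-array mechanism documented in the file's own comments, not a tested AMS1000 config: verify the option name and value against the comments shipped in your copy of scsi_vhci.conf, and note that the vendor ID field must be space-padded to exactly eight characters.

```
# /kernel/drv/scsi_vhci.conf (Solaris 10) - sketch only; check the
# comments at the bottom of your own copy of this file.
#
# Vendor ID padded to 8 chars: "HITACHI " (7 letters + 1 space) + "DF600F".
device-type-scsi-options-list =
        "HITACHI DF600F", "symmetric-option";

symmetric-option = 0x1000000;
```

A reconfiguration reboot (boot -r, or touch /reconfigure and reboot) is typically needed after editing this file before the new vhci device names appear.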

When you do get it working, your disk names will look something like this:
c2t600A0B8000267BDA000005FF45790A7Ad0

They're longer than the WWN-based names you have now...

We have been using STMS (or MPxIO... or whatever it's called this
week.. LOL) with ZFS for a month or two now... it seems to be working
just dandy...
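For anyone else trying to confirm whether MPxIO actually took effect after the reboot, a rough checklist on a Solaris 10 host (the long device name below is just the illustrative example from this post, not a real LUN):

```sh
# Enable MPxIO and update /etc/vfstab device names (asks to reboot):
stmsboot -e

# After the reboot, show the mapping from the old per-path device
# names to the new single scsi_vhci names:
stmsboot -L

# List the multipathed logical units and inspect one in detail
# (path counts and states should show both HBA paths):
mpathadm list lu
mpathadm show lu /dev/rdsk/c2t600A0B8000267BDA000005FF45790A7Ad0s2

# Each SAN LUN should now appear once in format, under /scsi_vhci
# instead of once per physical HBA path:
echo | format
```

If format still shows one entry per path (as in the original post), MPxIO is not claiming the array, which is usually a sign the scsi_vhci.conf entry is missing or the vendor/product string doesn't match.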

 



