EMC Clariion LUNs

#1  April 15th 05, 09:52 AM  bj

My company has purchased an EMC Clariion CX300 array and we're
currently deciding how best to carve up the LUNs. It will be in a dual
switched-fabric environment with two hosts, each with dual HBAs, one
into each fabric.

The CX300 has two storage processors (SPs), each with two 2 Gb fibre
ports. A single LUN can only be owned by one SP, which means a LUN can
only take advantage of the I/O bandwidth of that SP's two ports.

Would there be any benefit in creating lots of smaller LUNs, say 10 GB
each, splitting them equally between the SPs, and then, to create a
100 GB volume on the host, assigning 5 LUNs from SP A and 5 from SP B
and striping across them using an LVM?

In this scenario I/O would be balanced across both SPs, utilising
all four fibre ports on the array.
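
For illustration, a minimal Linux LVM2 sketch of that layout
(hypothetical device names; this assumes the ten LUNs appear to the
host as /dev/sdb through /dev/sdk, five owned by each SP):

  # one physical volume per LUN (5 bound to SP A, 5 to SP B)
  pvcreate /dev/sd[b-k]
  vgcreate vg_cx300 /dev/sd[b-k]
  # stripe the logical volume across all 10 PVs, 64 KB stripe size
  lvcreate -n lv_data -i 10 -I 64 -l 100%FREE vg_cx300

The -I value (stripe size, in KB) is only a placeholder here; it
should be matched to the array's stripe layout, as discussed below.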

This is not my idea but a colleague's; I like it and am interested to
see what everyone here thinks.

Regards

#2  April 15th 05, 01:58 PM  Jon Metzger

bj wrote:
[snip]


This is a pretty common thing to do to spread load. You not only get
(potentially) higher throughput from using more front-end fibre ports,
you also balance the processor load evenly across both SPs. Be careful
not to use your LVM to stripe LUNs that live on the same spindles. For
example, if you had only two Clariion RAID groups, you'd be better off
creating one large LUN in each and striping those two. If you created 5
in each RAID group and striped all 10 together, you would pay a
performance penalty in disk head seeks. Assuming RAID 5, you'll also
want to be careful about the stripe depth you use with your LVM. This
will depend on how wide your RAID group is and on your stripe element
size. For sequential writes, you should see better performance if your
host can do full-stripe writes.
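
As a worked example (hypothetical numbers, not from this thread): on a
4+1 RAID 5 group with a 64 KB stripe element, a full stripe is 4 data
disks x 64 KB = 256 KB, so an LVM stripe depth of 256 KB per LUN would
let the host issue aligned full-stripe writes and avoid the RAID 5
read-modify-write penalty on parity.
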
#3  April 15th 05, 03:01 PM  bj


Jon Metzger wrote:
[snip]


Superb, thanks for the reply. I've just been reading the EMC document
"EMC CLARiiON Best Practices for Fibre Channel Storage" and it
mirrors exactly what you recommend.

Regards

#4  April 20th 05, 04:43 PM

bj wrote:
The CX300 has two storage processors (SPs), each with two 2 Gb fibre
ports. A single LUN can only be owned by one SP, which means a LUN can
only take advantage of the I/O bandwidth of that SP's two ports.


One point to add here -- if you have purchased the licensed version of
PowerPath, you will be able to load balance across both of your HBAs
and both SPs. If you didn't purchase PowerPath, you can still use the
unlicensed (free/basic) version, which only does path failover.
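
As a rough sketch of what that looks like with the PowerPath powermt
CLI (policy name from memory; verify against the man page for your
version):

  powermt display dev=all        # list every path and its state per LUN
  powermt set policy=co dev=all  # co = CLAROpt, the CLARiiON load-balancing policy
  powermt save                   # persist the configuration across reboots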

#5  May 9th 05, 07:24 PM

It's also possible that more LUNs require more overhead on the SP. I'm
not really sure and would like to hear thoughts on this.

One thing I have learned is that, with all the theories out there,
including different types of data access (sequential vs. random), the
only way to know for sure is to do some tests with a stopwatch.
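
For example, a crude sequential-write timing run on the striped volume
(hypothetical mount point; dd only gives a rough first-order number,
and oflag=direct bypasses the host page cache so you measure the array
rather than RAM):

  # write 1 GB in 256 KB chunks and time it
  time dd if=/dev/zero of=/mnt/striped/testfile bs=256k count=4096 oflag=direct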

Also, I know there is a point of diminishing returns in layering
hardware striping, volume-manager striping, and then software
striping. At some point it gets to be too much for the processors and
can lead to thrashing. I wonder if anyone out there has any testing
data on this?

 



