RAIDing LUNs in a SAN



 
 
#1 - February 9th 06, 12:00 AM, posted to comp.arch.storage

Hi,

I'm trying to get a basic heads-up on the best approach to distributing
the disks in a SAN to various servers. Hopefully you'll excuse the
newbie-ish nature of all this :-)

To demonstrate with a theoretical example:
If I were trying to divide a 10-disk SAN between 3 servers, 2 of which
need RAID 5 arrays and the last requiring RAID 1, I could:

a) allocate 3 disks to server 1, 3 to Server 2 & 4 to server 3 and then
RAID these accordingly

or

b) convert 6 disks to RAID 5, 4 to RAID 1 and then split the RAID 5
array into 2 chunks (1 for each of the first two servers)

It seems to me that *if* both of these are possible approaches, then (b)
gives more usable space to each of the first two servers.
(5/6*diskspace each, rather than 2/3*diskspace)
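
For what it's worth, here's a quick sketch (in Python, purely illustrative,
assuming 10 identical disks and the two layouts above) of the arithmetic
behind those fractions:

disk = 1.0   # size of one disk, normalized

# (a) discrete groups: two 3-disk RAID 5 sets plus a 4-disk RAID 1
a_srv1 = (3 - 1) * disk        # RAID 5 loses one disk to parity -> 2.0
a_srv2 = (3 - 1) * disk        # -> 2.0
a_srv3 = 4 * disk / 2          # mirroring halves capacity       -> 2.0

# (b) one 6-disk RAID 5 split into two LUNs, plus a 4-disk RAID 1
b_srv1 = (6 - 1) * disk / 2    # half of 5 usable disks -> 2.5
b_srv2 = (6 - 1) * disk / 2    # -> 2.5
b_srv3 = 4 * disk / 2          # -> 2.0

print(a_srv1 / 3, b_srv1 / 3)  # 0.667 vs 0.833, i.e. 2/3 vs 5/6 per 3 raw disks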

Does that make sense? Is (b) even possible?

Thanks for any advice or pointers you can give!
Mick

#2 - February 13th 06, 04:37 AM, posted to comp.arch.storage

I fell into this kind of thinking when I started with my first SAN.
Remember that a benefit of the SAN is flexibility in the way you
provision storage to systems. Don't get locked into thinking that you
must provision discrete RAID 1/5 groups on the SAN for individual servers. I
would create a single large raid 5 of the 10 disks (best case is to use
5-9 disks for performance reasons but 10 is OK for many uses). Then I
would carve out LUNs of sufficient size for each server's needs. Then
provision the LUNs to the servers. You lose fewer disks for parity this
way and it is much more flexible. NOTE that if you require the best
possible IO performance then this would not be an appropriate
configuration.
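
To make the capacity side of that concrete, a rough sketch (the disk size
and LUN sizes below are made-up numbers, not a recommendation):

disk_gb = 300                          # hypothetical disk size
group_usable_gb = (10 - 1) * disk_gb   # 10-disk RAID 5: one disk to parity -> 2700 GB

# Hypothetical per-server LUN requests carved from the single group
luns_gb = {"server1": 900, "server2": 900, "server3": 600}

allocated = sum(luns_gb.values())
print(allocated, "GB allocated,", group_usable_gb - allocated,
      "GB left for new or expanded LUNs")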

Steve

#3 - February 13th 06, 09:24 PM, posted to comp.arch.storage

SteveRoss wrote:
> [...]
> You lose fewer disks for parity this way and it is much more flexible.


Efforts to "lose less disks for parity" are often severely misguided. I
wouldn't be a rich man if I had $50 for every time I'd seen a double-disk
failure within a set (with consequent loss of data) result from such a
misconfiguration, but I would, at least, be able to take a myself and a
friend to dinner somewhere nice. Come to think of it, somewhere more
than just "nice"... very, very nice.

--
Thor Lancelot Simon

"We cannot usually in social life pursue a single value or a single moral
aim, untroubled by the need to compromise with others." - H.L.A. Hart
#4 - February 14th 06, 12:09 AM, posted to comp.arch.storage

Two words...hot spare. Of course you do not set up any raid groups
without at least one globally available HS disk of the same size (or
greater) per every two enclosures. This will protect you in most of
the VERY rare cases where two disks in the set fail. Having said
that...most things in IT are decided on a risk/reward basis. I
could/should have listed the many other caveats to my above advice
(only gave the one related to IO) so thanks for pointing out this
additional one. In my case I have data that I back up regularly and if
it were to be unavailable for a period of hours (due to the "rare"
double disk failure you reference) I could restore from backup and
tolerate that outage. In this case the benefit of losing less space
outweighs the very remote risk. All of this is very general since each
admin must evaluate the config that best meets their situation.

#5 - February 14th 06, 02:46 PM, posted to comp.arch.storage

> It's a little hard to know what you're talking about, since you've
> decided to violate Usenet etiquette by not quoting any of the text
> to which you're responding.

Sorry about that. It was less of a decision and more ignorance of that
necessity. I figured since we are working in a discussion thread that
you could always just scroll back up and read what you said. However,
I can see the courtesy factor of this so I'll try to remember to quote
next time.

> A prolonged shutdown (generally caused by power outage, whether
> scheduled or unscheduled) takes the whole array offline

I could see that if you:

1. Did not have your array plugged into a standby power supply
2. Did not have your SPS, DAE-OS, and all non-OS DAEs plugged into
redundant power (not just power supplies but redundant circuits as
well).
3. If the redundant circuits were not UPS protected
4. If the UPS was not backed up by a generator that is tested for
power fail-over every 6 months and has a month of fuel supply onsite.
5. If all of the above were not monitored for failure conditions.

Based on the above I am not very concerned about power outages. Is it
still possible? Of course, but so is getting hit by a meteor.

> One drive fails, then under the greatly increased I/O load caused by
> reconstruction while the array is still under normal use, a second
> drive fails before the reconstruction completes.

This is a valid concern, though again it must be weighed against
risk/reward. If my SAN was holding NASDAQ data and my firm was charged
a million per minute for my data to be unavailable then I would design
accordingly. As it is, about half of the 63-ish TB of data that I am
responsible for really does not impact our ability to generate revenue
if it is down for a few hours. In my case for this particular data set
it is an acceptable risk that I MIGHT have 2 disks simultaneously fail.
I could recover from that with little to no issue.

> You are very lucky to work in an industry in which taking an array offline
> for a day to restore its contents from backup is acceptable. Some people
> don't.

I agree, which is why I qualified my statements: each admin must
make the config call based on their data availability requirements.

I THINK we are saying pretty much the same thing and I also feel we are
no longer adding value to poor Mick who originally asked the question,
so unless you feel there is more value you can add I suggest we end
this thread.

#6 - February 14th 06, 06:40 PM, posted to comp.arch.storage

Mick wrote:
> Hi,
>
> I'm trying to get a basic heads-up on the best approach to distributing
> the disks in a SAN to various servers. Hopefully you'll excuse the
> newbie-ish nature of all this :-)
>
> To demonstrate with a theoretical example:
> If I were trying to divide a 10-disk SAN between 3 servers, 2 of which
> need RAID 5 arrays and the last requiring RAID 1, I could:
>
> a) allocate 3 disks to server 1, 3 to Server 2 & 4 to server 3 and then
> RAID these accordingly
>
> or
>
> b) convert 6 disks to RAID 5, 4 to RAID 1 and then split the RAID 5
> array into 2 chunks (1 for each of the first two servers)
>
> It seems to me that *if* both of these are possible approaches, then (b)
> gives more usable space to each of the first two servers.
> (5/6*diskspace each, rather than 2/3*diskspace)
>
> Does that make sense?


It may, for the specific example you provide: 3-disk RAID-5 sets waste
an unreasonable amount of space, especially given the compromise in
performance which they often entail when compared with mirroring, and
the difference in likelihood of a second whole-disk failure using 6
disks rather than 3 should be relatively negligible (i.e., if you can't
afford the risk of using a 6-disk array, you quite possibly can't afford
it using a 3-disk array).

For larger arrays, it might not be a good trade-off. E.g., creating a
50-disk (or even a 20-disk) RAID-5 array and splitting it up would
entail what many people would consider to be unacceptable risk of a
second whole-disk failure (even when failures are strictly uncorrelated,
which, as Thor pointed out, may not be the case). Besides, if you can
afford to run a non-negligible risk of data loss anyway, you need to
start to question your need for RAID at all.

Furthermore, sharing a single array between multiple servers may
maximize total throughput (if one server's load is lighter, the other
gets the benefit of more disks to spread its own load across) but also
couples each server's performance to the others' load. Sometimes
that's what you want; other times it isn't.

Finally, as was mentioned recently in another thread here, today's disk
sizes carry with them a non-negligible risk that in the process of
rebuilding a failed disk you'll encounter an unreadable sector on one of
the survivors, resulting in limited data loss (probably 'only' in a
single file: for some applications this is an entirely tolerable risk
to run, e.g., if most files could easily be restored from a backup and
the array will continue with the rest of the rebuild rather than throw a
fit if it can't restore one sector; for others, *any* loss could be
catastrophic).

To use your own numbers, the chance that a disk will fail over the
5-year nominal service life of a 6-disk RAID-5 array varies from under
20% (using 1.4 million hour MTBF drives) to about 40% (using 600,000
hour MTBF drives) - though proactive replacement based on, e.g.,
S.M.A.R.T. logs might improve those odds. Obviously, the chance that a
second disk will experience an uncorrelated failure during the brief
rebuild interval is extremely small (though, once again, not all failure
modes are uncorrelated).
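
For anyone who wants to check those ballpark figures, here is one simple
way to reproduce them (a sketch assuming a constant failure rate, i.e.
exponentially distributed lifetimes, which is itself a simplification):

import math

HOURS_5Y = 5 * 365 * 24      # ~43,800 hours of nominal service life

def p_disk_fails(mtbf_hours, t=HOURS_5Y):
    # constant-failure-rate model: P(failure within t) = 1 - exp(-t / MTBF)
    return 1 - math.exp(-t / mtbf_hours)

def p_any_of_n_fails(mtbf_hours, n, t=HOURS_5Y):
    return 1 - (1 - p_disk_fails(mtbf_hours, t)) ** n

print(p_any_of_n_fails(1400000, 6))       # ~0.17 -> "under 20%"
print(p_any_of_n_fails(600000, 6))        # ~0.35 -> "about 40%"

# An uncorrelated second failure among the 5 survivors during, say, a
# 24-hour rebuild window is a small fraction of a percent even for the
# 600,000-hour drives:
print(p_any_of_n_fails(600000, 5, t=24))  # ~0.0002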

But the 5 survivors could contain from 370 GB to 2.5 TB of data, all of
which must be used to reconstruct the failed disk. If the uncorrectable
error rate is 1 per 10^14 bits (about 1 in 10 TB), that means you have
something between a 4% and 25% chance that the reconstruction will fail
to recreate something - likely a *far* greater probability than that of
a second whole-disk failure (though, again, you may be able to mitigate
this risk if the array performs background 'scrubbing' activity to
detect failing sectors before they become completely unreadable).
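
A similar sketch for the unreadable-sector calculation (treating sector
errors as independent at the quoted rate; the exact result comes out a
little below the rounder figures above, which use the looser "1 in 10 TB"
shorthand):

import math

def p_rebuild_hits_ure(survivor_bytes, bit_error_rate=1e-14):
    # chance that reading every surviving bit hits at least one
    # uncorrectable error, assuming independent errors
    bits = survivor_bytes * 8
    return -math.expm1(bits * math.log1p(-bit_error_rate))

print(p_rebuild_hits_ure(370e9))    # ~0.03 for 370 GB of surviving data
print(p_rebuild_hits_ure(2.5e12))   # ~0.18 for 2.5 TB of surviving data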

That's one reason why RAID-6 (double-parity protection that can tolerate
concurrent failures of any *two* disks) is gaining in popularity these days.

- bill
#7 - February 14th 06, 07:54 PM, posted to comp.arch.storage

SteveRoss wrote:

>> A prolonged shutdown (generally caused by power outage, whether
>> scheduled or unscheduled) takes the whole array offline


> I could see that if you:
>
> 1. Did not have your array plugged into a standby power supply
>
> [...]

What part of "scheduled or unscheduled" didn't you read?

The problem is that spinning an array down _for whatever reason_ -- even
if part of planned maintenance -- brings with it a not insignificant risk
that some drive won't spin back up. When you do spin the drives back up,
it is not uncommon in my experience that if one doesn't come back, two
don't, all of them typically having been manufactured and sold at the
same time and run under the same conditions since then. In my experience
this kind of simultaneous double-disk failure is _more_ common than the
convenient sort of neatly-sequential, oh-by-the-way-weren't-we-lucky-we-
had-time-to-do-a-full-rebuild-in-between double-disk failure that a hot
spare provides protection from.

Spindles are cheap -- even spindles that have to go into slots in arrays
whose vendors have artificially limited the number of such slots to provide
model differentiation across their product line -- when compared to the
cost of shutting a whole business down to recover terabytes of data from
backups. Large RAID 5 sets with no other redundancy are, in my opinion,
almost always a false economy.

--
Thor Lancelot Simon

"We cannot usually in social life pursue a single value or a single moral
aim, untroubled by the need to compromise with others." - H.L.A. Hart
#8 - February 15th 06, 03:14 PM, posted to comp.arch.storage

Your posts seem to me to be degrading quickly in value such that they
have become mere ranting. It is always frustrating to deal with
single-minded zealots, so this will be my last post in the thread. BTW,
you may want to read your signature line... you could learn a lot from
it.

Steve

#9 - April 1st 06, 05:28 AM, posted to comp.arch.storage

Hey Mick,

Depending on what "disk array" you have attached to the SAN, you should be
able to create a RAID 5 "disk group" and a RAID 1 "disk group" and then carve
LUNs from them to allocate to any server you choose. Most arrays do not limit
you to having to allocate whole disks to one server but allow you to group
them together in a RAID group and then allocate a portion of the total
usable space to specific hosts.

I hope this helps

Rick



"Mick" wrote in message
ups.com...
Hi,

I'm trying to get a basic heads-up on the best approach to distributing
the disks in a SAN to various servers. Hopefully you'll excuse the
newbie-ish nature of all this :-)

To demonstrate with a theoretical example:
If I was trying to divide a 10 disk SAN between 3 servers...2 of which
need RAID 5 arrays and the last requiring RAID 1 I could:

a) allocate 3 disks to server 1, 3 to Server 2 & 4 to server 3 and then
RAID these accordingly

or

b) convert 6 disks to RAID 5, 4 to RAID 1 and then split the RAID 5
array into 2 chunks (1 for each of the first two servers)

It seems to me that *if* both of these are possibl approaches then (b)
gives more usable space to each of the first two servers.
(5/6*diskspace each, rather than 2/3*diskspace)

Does that make sense? Is (b) even possible?

Thanks for any advice or pointers you can give!
Mick



 



