A few questions before assembling Linux 7.5TB RAID 5 array

#1  December 21st 06, 07:49 PM
Posted to comp.sys.ibm.pc.hardware.storage, comp.arch.storage, alt.comp.hardware.pc-homebuilt, comp.os.linux.hardware
Yeechang Lee

I'm shortly going to be setting up a Linux software RAID 5 array using
16 500GB SATA drives with one HighPoint RocketRAID 2240 PCI-X
controller (i.e., the controller will be used for its 16 SATA ports,
not its "hardware" fakeraid). The array will be used to store and
serve locally and via gigabit Ethernet large, mostly high-definition
video recordings (up to six or eight files being written to and/or
read from simultaneously, as I envision it). The smallest files will
be 175MB-700MB, the largest will be 25GB+, and most files will be from
4GB to 12GB with a median of about 7.5GB. I plan on using JFS as the
filesystem, without LVM.

A few performance-related questions:

* What chunk size should I use? In previous RAID 5 arrays I've built
for similar purposes I've used 512K. For the setup I'm describing,
should I go bigger? Smaller?
* Should I stick with the default of 0.4% of the array as given over
to the JFS journal? If I can safely go smaller without a
rebuilding-performance penalty, I'd like to. Conversely, if a larger
journal is recommended, I can do that.
* I'm wondering whether I should have ordered two RocketRAID 2220
(each with eight SATA ports) instead of the 2240. Would two cards,
each in a PCI-X slot, perform better? I'll be using the Supermicro
X7DVL-E
(http://www.supermicro.com/products/motherboard/Xeon1333/5000V/X7DVL-E.cfm)
as the motherboard.
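
For concreteness, here is roughly what I have in mind, assuming the sixteen drives show up as /dev/sdb through /dev/sdq (device names, mount point, and the 512K/default-journal values are just placeholders until I settle the questions above):

  # software RAID 5 across all sixteen drives, 512K chunk as a starting point
  mdadm --create /dev/md0 --level=5 --raid-devices=16 --chunk=512 /dev/sd[b-q]

  # JFS straight on the md device, no LVM; -s overrides the journal size (in MB)
  # if I decide not to take the 0.4%-of-array default
  mkfs.jfs -q -s 128 /dev/md0
  mount -t jfs /dev/md0 /mnt/video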

--
http://www.pobox.com/~ylee/ PERTH ---- *

Homemade 2.8TB RAID 5 storage array:
http://groups.google.ca/groups?selm=slrnd1g04a.5mt.ylee%40pobox.com
#2  December 21st 06, 07:54 PM
General Schvantzkoph

On Thu, 21 Dec 2006 18:49:10 +0000, Yeechang Lee wrote:

> I'm shortly going to be setting up a Linux software RAID 5 array using
> 16 500GB SATA drives with one HighPoint RocketRAID 2240 PCI-X
> controller (i.e., the controller will be used for its 16 SATA ports,
> not its "hardware" fakeraid). The array will be used to store and
> serve locally and via gigabit Ethernet large, mostly high-definition
> video recordings (up to six or eight files being written to and/or
> read from simultaneously, as I envision it). The smallest files will
> be 175MB-700MB, the largest will be 25GB+, and most files will be from
> 4GB to 12GB with a median of about 7.5GB. I plan on using JFS as the
> filesystem, without LVM.
>
> A few performance-related questions:
>
> * What chunk size should I use? In previous RAID 5 arrays I've built
> for similar purposes I've used 512K. For the setup I'm describing,
> should I go bigger? Smaller?
> * Should I stick with the default of 0.4% of the array as given over
> to the JFS journal? If I can safely go smaller without a
> rebuilding-performance penalty, I'd like to. Conversely, if a larger
> journal is recommended, I can do that.
> * I'm wondering whether I should have ordered two RocketRAID 2220
> (each with eight SATA ports) instead of the 2240. Would two cards,
> each in a PCI-X slot, perform better? I'll be using the Supermicro
> X7DVL-E
> (http://www.supermicro.com/products/motherboard/Xeon1333/5000V/X7DVL-E.cfm)
> as the motherboard.


For a system that large, wouldn't you be better off with a 3Ware controller, which is a real RAID controller, rather than the HighPoints, which aren't?
#3  December 21st 06, 09:08 PM
Scott Alfter

Yeechang Lee wrote:
> * I'm wondering whether I should have ordered two RocketRAID 2220
> (each with eight SATA ports) instead of the 2240. Would two cards,
> each in a PCI-X slot, perform better? I'll be using the Supermicro
> X7DVL-E


I wouldn't think so. Unless the PCI-X slots you intend to use are on
separate busses (not likely), the two cards will contend for the same amount
of bandwidth. Whether data for the drives gets funnelled through one slot
or two shouldn't make a difference.

With PCI Express, each slot gets its own dedicated chunk of bandwidth to
the northbridge. The motherboard you're considering has a couple of PCI-E
slots (one with 8 lanes and another with 4 lanes). Since you were already
looking at HighPoint controllers, a couple of RocketRAID 2320s might've been
the better way to go (as long as you weren't planning on using those slots
for something else).
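
Rough numbers, for what they're worth: a 64-bit/133MHz PCI-X bus tops out around 1GB/s shared by everything on that bus, while sixteen SATA drives streaming sequentially might manage something like 60-70MB/s apiece, i.e. on the order of 1GB/s in aggregate. So a single shared PCI-X bus is roughly at the limit during a full-speed rebuild or resync, though it's far more than six or eight HD streams (a few MB/s each) will ever ask for. PCI Express x4 gives you roughly 1GB/s and x8 roughly 2GB/s, per slot and in each direction.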

_/_
/ v \ Scott Alfter (remove the obvious to send mail)
(IIGS( http://alfter.us/ Top-posting!
\_^_/ rm -rf /bin/laden What's the most annoying thing on Usenet?

#4  December 21st 06, 09:52 PM
Steve Cousins



Yeechang Lee wrote:

> A few performance-related questions:
>
> * What chunk size should I use? In previous RAID 5 arrays I've built
> for similar purposes I've used 512K. For the setup I'm describing,
> should I go bigger? Smaller?


It is best to try a number of different configurations and benchmark
each one to see how it works with your needs. For my needs I've mainly
used 64 KB stripes because it gave better performance than 128 or
higher. Make sure you match the file system chunk size to the RAID
stripe size too.
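
With XFS and Linux software RAID, for example, that looks something like this (a sketch, with placeholder device names; su is the RAID chunk size and sw the number of data disks, which for a 16-drive RAID 5 would be 15):

  mdadm --create /dev/md0 --level=5 --raid-devices=16 --chunk=64 /dev/sd[b-q]
  # tell the filesystem the stripe geometry so allocations line up with it
  mkfs.xfs -d su=64k,sw=15 /dev/md0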

> * Should I stick with the default of 0.4% of the array as given over
> to the JFS journal? If I can safely go smaller without a
> rebuilding-performance penalty, I'd like to. Conversely, if a larger
> journal is recommended, I can do that.


I'd probably just keep it at the defaults.

> * I'm wondering whether I should have ordered two RocketRAID 2220
> (each with eight SATA ports) instead of the 2240. Would two cards,
> each in a PCI-X slot, perform better? I'll be using the Supermicro
> X7DVL-E
> (http://www.supermicro.com/products/motherboard/Xeon1333/5000V/X7DVL-E.cfm)
> as the motherboard.



As Scott mentioned, since it looks like this motherboard has both slots sharing the PCI-X bus, splitting the drives across two cards probably wouldn't help unless the card's own architecture has limitations. Even if they were separate buses, though, I don't think two cards would help you, since the bandwidth of the bus exceeds the needs of the drives.

For this many SATA drives I would hope that you are going with RAID6 and
a hot-spare.

Steve


#5  December 21st 06, 10:28 PM
Yeechang Lee

Steve Cousins wrote:
>> * What chunk size should I use? In previous RAID 5 arrays I've built
>> for similar purposes I've used 512K. For the setup I'm describing,
>> should I go bigger? Smaller?
>
> It is best to try a number of different configurations and benchmark
> each one to see how it works with your needs. For my needs I've mainly
> used 64 KB stripes because it gave better performance than 128 or
> higher.


I figured as much, but was hoping someone would say "Hey, in my
experience ___KB chunks are best for your situation, and I'd raise the
chunk size by ___KB for every extra terabyte." I guess there's no way
around manually building and rebuilding the array a few times, unless
the relative performance of the various chunk sizes is the same while
the array is still dirty and being built for the first time as it is
once the array is finished.
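
If I do end up iterating, I'll probably just script it; something along these lines (a rough sketch with placeholder device names and mount point, and the read-back numbers only mean much once the caches are dropped or the test file is bigger than RAM):

  for chunk in 64 128 256 512 1024; do
      mdadm --create --run /dev/md0 --level=5 --raid-devices=16 \
            --chunk=$chunk /dev/sd[b-q]
      mkfs.jfs -q /dev/md0
      mount -t jfs /dev/md0 /mnt/test
      # crude sequential write/read with an 8GB file, about my median recording
      dd if=/dev/zero of=/mnt/test/big bs=1M count=8192 conv=fsync
      echo 3 > /proc/sys/vm/drop_caches
      dd if=/mnt/test/big of=/dev/null bs=1M
      umount /mnt/test
      mdadm --stop /dev/md0
  done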

> Make sure you match the file system chunk size to the RAID stripe
> size too.


I don't think this is an issue with JFS; that is, mkfs.jfs doesn't
offer any such options in the first place.

> For this many SATA drives I would hope that you are going with RAID6
> and a hot-spare.


Undecided. While the recordings would be inconvenient to lose, it
would not be life-or-death. I suspect I'll end up doing RAID 6 but no
hot spare.
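
Either way it's only a one-line difference at creation time; roughly (same placeholder device names as in my original post):

  # RAID 6 across all sixteen drives, no spare: ~7TB usable, any two can fail
  mdadm --create /dev/md0 --level=6 --raid-devices=16 --chunk=512 /dev/sd[b-q]

  # or fifteen in the array plus one hot spare: ~6.5TB usable
  mdadm --create /dev/md0 --level=6 --raid-devices=15 --spare-devices=1 \
        --chunk=512 /dev/sd[b-q]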

In my previous such array (see below) I went to the trouble of buying
an extra drive for cold swap which, naturally, hasn't ever been
needed. Given the enterprise-class Western Digital drives I'm using
this time, I shouldn't have any trouble hunting down an exact spare or
two in three or five years' time; worst comes to worst, I'd just buy a
750GB drive for whatever ridiculously low price they sell for then and
not use the extra space in the array.

--
http://www.pobox.com/~ylee/ PERTH ---- *

Homemade 2.8TB RAID 5 storage array:
http://groups.google.ca/groups?selm=slrnd1g04a.5mt.ylee%40pobox.com
#6  December 22nd 06, 09:42 AM
Vegard Svanberg

On 2006-12-21, Yeechang Lee wrote:

> I'm shortly going to be setting up a Linux software RAID 5 array using
> 16 500GB SATA drives with one HighPoint RocketRAID 2240 PCI-X
> controller (i.e., the controller will be used for its 16 SATA ports,
> not its "hardware" fakeraid).

[snip]

What kind of enclosure/cabinet do you use for this setup? Does it have
hot-swap drive bays?

--
Vegard Svanberg [*Takapa@IRC (EFnet)]

#7  December 22nd 06, 10:59 AM
Yeechang Lee

Vegard Svanberg wrote:
> What kind of enclosure/cabinet do you use for this setup? Does it have
> hot-swap drive bays?


Yes. It's a 4U Chenbro rackmount.

--
http://www.pobox.com/~ylee/ PERTH ---- *

Homemade 2.8TB RAID 5 storage array:
http://groups.google.ca/groups?selm=slrnd1g04a.5mt.ylee%40pobox.com
#8  December 22nd 06, 12:34 PM
Conor

Yeechang Lee wrote:
> Steve Cousins wrote:
>>> * What chunk size should I use? In previous RAID 5 arrays I've built
>>> for similar purposes I've used 512K. For the setup I'm describing,
>>> should I go bigger? Smaller?
>>
>> It is best to try a number of different configurations and benchmark
>> each one to see how it works with your needs. For my needs I've mainly
>> used 64 KB stripes because it gave better performance than 128 or
>> higher.


> I figured as much, but was hoping someone would say "Hey, in my
> experience ___KB chunks are best for your situation, and I'd raise the
> chunk size by ___KB for every extra terabyte." I guess there's no way
> around manually building and rebuilding the array a few times, unless
> the relative performance of the various chunk sizes is the same while
> the array is still dirty and being built for the first time as it is
> once the array is finished.

Sadly, no, as usage and typical file size play a large part in it and no
two arrays are going to be used the same way.


--
Conor

"You're not married,you haven't got a girlfriend and you've never seen
Star Trek? Good Lord!" - Patrick Stewart
#9  December 22nd 06, 03:15 PM
Guy Dawson

Yeechang Lee wrote:
> I'm shortly going to be setting up a Linux software RAID 5 array using
> 16 500GB SATA drives with one HighPoint RocketRAID 2240 PCI-X
> controller (i.e., the controller will be used for its 16 SATA ports,
> not its "hardware" fakeraid).


How long are you expecting a rebuild to take in the event of a
disk failure? You may well be better off creating a bunch of smaller
5-disk RAID 5 arrays rather than one big one.
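
Back-of-the-envelope: rebuilding one failed 500GB member means writing the full 500GB to the replacement while reading every surviving disk. Even at an optimistic 50MB/s sustained that is 500,000MB / 50MB/s = 10,000 seconds, call it three hours, with the array otherwise idle; with your video streams running and the parity being recomputed in software, several times that wouldn't surprise me, and a RAID 5 array has no redundancy left for the whole window.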

An aside - we've just taken delivery of an EMC CX300 storage system.
We've configured a RAID 5 array with fifteen 146GB Fibre Channel disks and
a hot spare. We've just pulled one of the disks from the array and
are watching the rebuild take place. I'll let you know how long it
takes!

Guy
-- --------------------------------------------------------------------
Guy Dawson I.T. Manager Crossflight Ltd

#10  December 22nd 06, 05:31 PM
Steve Cousins

Yeechang Lee wrote:

> Steve Cousins wrote:
>> Make sure you match the file system chunk size to the RAID stripe
>> size too.
>
> I don't think this is an issue with JFS; that is, mkfs.jfs doesn't
> offer any such options in the first place.



OK. I've never used JFS. XFS has worked really well for us. One nice
thing when testing different configurations is that the file system
creates very quickly. mkfs.xfs can also figure out the chunk size
automatically if you use Linux software RAID. If you do go with RAID6
and a hot spare, though, make sure you use a very new version of the
XFS tools, because I found a bug where it didn't use the correct chunk
size; the hot spare was throwing it off. They fixed it for me and I
believe the fix is in the latest version.

Another thing I ran into is that if you ever want to run xfs_check on a
volume this big, it takes a lot of memory and/or swap space. On a 5 TB
RAID array it was always crashing. I have 3 GB of RAM on that machine
and it wasn't enough. I ended up adding a 20 GB swap file to the 3 GB
swap partition, and that allowed xfs_check to finish. I don't know if
JFS has the same memory needs, but it is worth checking out before you
need to run it for real.
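
For reference, adding the temporary swap file was nothing fancier than something like this (path and device name here are placeholders):

  # create a 20GB swap file, enable it, run the check, then drop it again
  dd if=/dev/zero of=/swapfile bs=1M count=20480
  mkswap /swapfile
  swapon /swapfile
  xfs_check /dev/md0
  swapoff /swapfile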

Good luck and Happy Holidays,

Steve

 



