How I built a 2.8TB RAID storage array



 
 
  #11 - February 20th 05, 08:45 PM - dg

I need to stay away from this thread for a while; I am starting to feel some
inspiration. It has been some time since I have run Linux, and, to be
honest, I have always had an urge to build a functional Linux box for myself.
And RAID fascinates me, so, well, I need to stop reading this stuff. I
can't afford a new toy now.

--Dan

"Yeechang Lee" wrote in message
...
Great project by the way.


Thank you. It still amazes me to see that little '2.6T' label appear
in the 'df -h' output.



  #12 - February 20th 05, 09:26 PM - Eric Gisin

"Yeechang Lee" wrote in message
...

CONTROLLER CARDS
Initial: Two Highpoint RocketRAID 454 cards.
Actual: Two 3Ware 7506-4LP cards.
Why: I needed PATA cards to go with my PATA drives, and also wanted to
put the two PCI-X slots on my motherboard to use. I found exactly two
PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
that the Acard's Linux driver compatibility looked really, really
iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
which would've saved me about $120, but figured I'd be better off
distributing the bandwidth over two PCI-X slots rather than one.

No, one PCI-X card would be just as good.

You don't mention the ethernet card, which could also be PCI-X.

SOFTWARE
Initial: Linux software RAID 5 and XFS or JFS.
Actual: Linux software RAID 5 and JFS.
Why: Initially I planned on software RAID knowing that the Highpoint
(and the equivalent Promise and Adaptec cards) didn't do true hardware
RAID. Even after switching over to 3Ware (which *does* do true
hardware RAID), everything I saw and read convinced me that software
RAID was still the way to go for performance, long-term compatibility,
and even 400GB extra space (given I'd be building one large RAID 5
array instead of two smaller ones).

Is there a comparison of Linux RAID 5 to top-end RAID cards? I suspect 3Ware is
better.

  #13 - February 20th 05, 10:03 PM - John-Paul Stewart

Eric Gisin wrote:
"Yeechang Lee" wrote in message
...

CONTROLLER CARDS
Initial: Two Highpoint RocketRAID 454 cards.
Actual: Two 3Ware 7506-4LP cards.
Why: I needed PATA cards to go with my PATA drives, and also wanted to
put the two PCI-X slots on my motherboard to use. I found exactly two
PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
that the Acard's Linux driver compatibility looked really, really
iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
which would've saved me about $120, but figured I'd be better off
distributing the bandwidth over two PCI-X slots rather than one.


No, one PCI-X card would be just as good.


Not necessarily. PCI (and PCI-X) bandwidth is per bus, not per slot.
So if those two cards are in two slots on one PCI-X bus, that's not
distributing the bandwidth at all. The motherboard may offer multiple
PCI-X busses, in which case the OP may want to ensure the cards are in
slots that correspond to different busses. The built-in NIC on most
motherboards (along with most other built-in devices) is also on one
(or more) of the PCI busses, so consider the bandwidth used by those as
well when distributing the load.
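
(For reference, one quick way to see which slots and onboard devices share
a bus is something like the following; this is only a sketch, and the exact
device names and bus numbers will differ from board to board:)

  # Show the PCI tree; cards hanging off different buses/host bridges
  # are not competing with each other for bus bandwidth.
  $ /sbin/lspci -tv

  # In the flat listing, the "bus:device.function" prefix tells you which
  # bus each card sits on; two cards with the same bus number share it.
  $ /sbin/lspci | grep -i -e 3ware -e ethernet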
  #14 - February 21st 05, 12:05 AM - Folkert Rienstra

"Eric Gisin" wrote in message
"Yeechang Lee" wrote in message ...

CONTROLLER CARDS
Initial: Two Highpoint RocketRAID 454 cards.
Actual: Two 3Ware 7506-4LP cards.
Why: I needed PATA cards to go with my PATA drives, and also wanted to
put the two PCI-X slots on my motherboard to use. I found exactly two
PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
that the Acard's Linux driver compatibility looked really, really
iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
which would've saved me about $120, but figured I'd be better off
distributing the bandwidth over two PCI-X slots rather than one.

No, one PCI-X card would be just as good.


Probably, yes.
It depends on the PCI-X version and clock, and on whether the slots are
on separate PCI buses or not.

If they are on separate buses, the highest clock is attainable and each
card gets the full PCI-X bandwidth, say 1 GB/s (133 MHz) or 533 MB/s (66 MHz).
If they are on the same bus, the clock is lower to start with and the cards
have to share that bus's PCI-X bandwidth: still a plentiful 400 MB/s each
at 100 MHz, but it may become iffy at a 66 MHz clock (266 MB/s each) or
even 50 MHz.
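
(The arithmetic behind those figures, as a rough sketch: a 64-bit bus moves
8 bytes per clock, and the 66/133 MHz rates are really 66.66/133.33 MHz,
hence the usual 533 MB/s and 1066 MB/s figures:)

  # Rough peak bandwidth of a 64-bit PCI/PCI-X bus at various clocks
  $ for mhz in 50 66 100 133; do echo "$mhz MHz: $((mhz * 8)) MB/s"; done
  50 MHz: 400 MB/s
  66 MHz: 528 MB/s
  100 MHz: 800 MB/s
  133 MHz: 1064 MB/s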


You don't mention the ethernet card, which could also be PCI-X.


What if?


SOFTWARE
Initial: Linux software RAID 5 and XFS or JFS.
Actual: Linux software RAID 5 and JFS.
Why: Initially I planned on software RAID knowing that the Highpoint
(and the equivalent Promise and Adaptec cards) didn't do true hardware
RAID. Even after switching over to 3Ware (which *does* do true
hardware RAID), everything I saw and read convinced me that software
RAID was still the way to go for performance, long-term compatibility,
and even 400GB extra space (given I'd be building one large RAID 5
array instead of two smaller ones).

Is there a comparison of Linux RAID 5 to top-end RAID cards?
I suspect 3Ware is better.



  #15 - February 21st 05, 04:02 AM - Yeechang Lee

John-Paul Stewart wrote:
No, one PCI-X card would be just as good.


Not necessarily. PCI (and PCI-X) bandwidth is per bus, not per slot.


The Supermicro X5DAL-G motherboard does indeed offer a dedicated bus
to each PCI-X slot, thus my desire to spread out the load with two
cards. Otherwise I'd have gone with the 7506-8 eight-channel card
instead and saved about $120.

The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
slots' buses, but I only have a 100Mbit router right now. I wonder
whether I should expect it to significantly contribute to overall
bandwidth usage on that bus, either now or if/when I upgrade to
Gigabit?

--
Read my Deep Thoughts @ URL:http://www.ylee.org/blog/ PERTH ---- *
Cpu(s): 5.6% us, 5.4% sy, 0.2% ni, 73.9% id, 10.4% wa, 4.6% hi, 0.0% si
Mem: 515800k total, 511808k used, 3992k free, 1148k buffers
Swap: 2101032k total, 240k used, 2100792k free, 345344k cached
  #16 - February 21st 05, 04:08 AM - dg

"Yeechang Lee" wrote in message
...
The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
slots' buses, but I only have a 100Mbit router right now. I wonder
whether I should expect it to significantly contribute to overall
bandwidth usage on that bus, either now or if/when I upgrade to
Gigabit?


When you DO go gigabit, be sure to at least do some basic throughput
benchmarks (even if it's just with a stopwatch, but I suspect you will come
up with a good method) and then compare afterwards. That is really good
data to get firsthand from somebody with such an extreme array and a well
documented hardware and software setup. Really good stuff! I wonder what
kind of data rates that array is capable of within the machine, too.
Somewhere there is a guy claiming to get 90+ MB per second over gigabit
ethernet using RAID arrays on both ends.
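
(One crude way to run that benchmark without the receiving end's disks
getting in the way; this is only a sketch, the hostname and port are
placeholders, and the flags vary between netcat flavors:)

  # On the receiving box: listen and throw the data away
  $ nc -l -p 9000 > /dev/null

  # On the RAID box: time pushing 1 GB across the wire; use /dev/zero for
  # a pure network test, or a real file on the array for an end-to-end one
  $ time dd if=/dev/zero bs=1M count=1024 | nc otherhost 9000
  # (some netcat versions need -q 0 here to exit once dd finishes)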

Gigabit switches are getting so cheap it's incredible.

--Dan


  #17 - February 21st 05, 04:11 AM - Yeechang Lee

Eric Gisin wrote:
Is there a comparison of Linux RAID 5 to top-end RAID cards? I
suspect 3Ware is better.


No, the consensus is that Linux software RAID 5 has the edge on even
3Ware (the consensus hardware RAID leader). See, among others,
URL:http://www.chemistry.wustl.edu/~gelb/castle_raid.html (which
does note that software striping two 3Ware hardware RAID 5 solutions
"might be competitive" with software) and
URL:http://staff.chess.cornell.edu/~schuller/raid.html (which states
that no, all-software still has the edge in such a scenario).
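
(For anyone curious what the software-RAID side of such a box looks like,
a minimal md sketch; the device names, drive count, and mount point below
are placeholders, not my actual configuration:)

  # Build one software RAID 5 array out of the disks the controllers
  # export individually (example device names)
  $ mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]1

  # Watch the initial build/resync progress
  $ cat /proc/mdstat

  # Put JFS on it and mount
  $ mkfs.jfs /dev/md0
  $ mount /dev/md0 /array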

--
Read my Deep Thoughts @ URL:http://www.ylee.org/blog/ PERTH ---- *
Cpu(s): 5.6% us, 5.6% sy, 0.3% ni, 72.2% id, 11.9% wa, 4.5% hi, 0.0% si
Mem: 515800k total, 512004k used, 3796k free, 37608k buffers
Swap: 2101032k total, 240k used, 2100792k free, 293748k cached
  #18 - February 21st 05, 04:54 AM - Thor Lancelot Simon

In article ,
Yeechang Lee wrote:
Eric Gisin wrote:
Is there a comparison of Linux RAID 5 to top-end RAID cards? I
suspect 3Ware is better.


No, the consensus is that Linux software RAID 5 has the edge on even
3Ware (the consensus hardware RAID leader). See, among others,


If all you care about is "rod length check" long-sequential-read or
long-sequential-write performance, that's probably true. If, of
course, you restrict yourself to a single stream...

...of course, in the real world, people actually do short writes and
multi-stream large accesses every once in a while. Software RAID is
particularly bad at the former because it can't safely gather writes
without NVRAM. Of course, both software implementations *and* typical
cheap PCI RAID card (e.g. 3ware 7/8xxx) implementations are pretty
awful at the latter, too, and for no good reason that I could ever see.

--
Thor Lancelot Simon

"The inconsistency is startling, though admittedly, if consistency is to be
abandoned or transcended, there is no problem." - Noam Chomsky
  #19 - February 21st 05, 06:48 AM - Steve Wolfe

No, one PCI-X card would be just as good.

Not necessarily. PCI (and PCI-X) bandwidth is per bus, not per slot.


The Supermicro X5DAL-G motherboard does indeed offer a dedicated bus
to each PCI-X slot, thus my desire to spread out the load with two
cards. Otherwise I'd have gone with the 7506-8 eight-channel card
instead and saved about $120.

The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
slots' buses, but I only have a 100Mbit router right now. I wonder
whether I should expect it to significantly contribute to overall
bandwidth usage on that bus, either now or if/when I upgrade to
Gigabit?


The numbers that you posted from Bonnie++, if I followed them correctly,
showed max throughputs in the 20 MB/second range. That seems awfully slow
for this sort of setup.

As a comparison, I have two machines with software RAID 5 arrays, one a
2x866 P3 system with 5x120-gig drives, the other an A64 system with
8x300-gig drives, and both of them can read and write to/from their RAID 5
arrays at 45+ MB/s, even with the controller cards plugged into a single
32/33 PCI bus.
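
(For what it's worth, the sort of quick check I use to get those numbers;
a sketch only, with the mount point, file size, and user being whatever
fits the box, and the test file a couple of times larger than RAM so the
page cache doesn't flatter the result:)

  # Sequential write, then sequential read
  $ cd /array
  $ time dd if=/dev/zero of=testfile bs=1M count=4096
  $ time dd if=testfile of=/dev/null bs=1M

  # Or let bonnie++ do it (-s is the test file size in MB, -u the user)
  $ bonnie++ -d /array -s 4096 -u nobody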

To answer your question, GigE at full speed is a bit more than 100
MB/sec. The PCI-X busses on that motherboard are both capable of at least
100 MHz operation, which at 64 bits would give you a max *realistic*
throughput of about 500 MB/second, so any performance detriment from using
the gigE would likely be completely insignificant.

I've got another machine with a 3Ware 7000-series card with a bunch of
120-gig drives on it (I haven't looked at the machine in quite a while), and
I was pretty disappointed with the performance from that controller. It
works for the intended usage (point-in-time snapshots), but responsiveness
of the machine under disk I/O is pathetic - even with dual Xeons.

steve


  #20 - February 21st 05, 08:08 AM - Yeechang Lee

Steve Wolfe wrote:
The numbers that you posted from Bonnie++, if I followed them correctly,
showed max throughputs in the 20 MB/second range. That seems
awfully slow for this sort of setup.


Agreed. However, those benchmarks were done with no tuning whatsoever
(and, as noted, with the three distributed computing projects going full
blast); since then I've done some minor tweaking, notably the noatime
mount option, which has helped. I'd post newer benchmarks, but the array
is rebuilding itself right now due to a kernel panic I caused by trying
to use smartctl to talk to the bare drives without invoking the special
3ware switch.
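
(For the record, a sketch of the tweak and of the command that bit me, done
the right way; the mount point, fstab line, and 3ware port/device numbers
are examples only, so check the smartmontools documentation for your card:)

  # noatime: stop updating access times on every read
  $ mount -o remount,noatime /array
  # ...and make it stick via /etc/fstab:
  # /dev/md0  /array  jfs  defaults,noatime  0 0

  # SMART queries for drives behind a 3ware 7xxx/8xxx card should go
  # through the controller's device node, one port at a time, rather
  # than at the bare drives
  $ smartctl -a -d 3ware,0 /dev/twe0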

To answer your question, GigE at full speed is a bit more than
100 MB/sec. The PCI-X busses on that motherboard are both capable
of at least 100 MHz operation, which at 64 bits would give you a max
*realistic* throughput of about 500 MB/second, so any performance
detriment from using the gigE would likely be completely
insignificant.


That was my sense as well; I suspect network saturation-by-disk will
only cease to be an issue when we all hit the 10GigE world.

(Actually, the 7506 cards are 66MHz PCI-X, so they don't take full
advantage of the theoretical bandwidth available on the slots,
anyway.)

I've got another machine with a 3Ware 7000-series card with a bunch of
120-gig drives on it (I haven't looked at the machine in quite a
while), and I was pretty disappointed with the performance from that
controller.


Appreciate the report. Fortunately, as a home user, performance isn't my
prime consideration (nor, given that I'm only recording TV episodes, is
data integrity, really; hence no backup plans for the array, even if
backing up 2.8TB were at all practical budget-wise). Were performance what
I was after, I'd probably have gone with the 9000-series controllers and
SATA drives, but my wallet's busted enough with what I already have!

--
Read my Deep Thoughts @ URL:http://www.ylee.org/blog/ PERTH ---- *
Cpu(s): 4.7% us, 3.2% sy, 0.3% ni, 75.7% id, 14.0% wa, 2.0% hi, 0.0% si
Mem: 515800k total, 510704k used, 5096k free, 18540k buffers
Swap: 2101032k total, 240k used, 2100792k free, 305484k cached
 



