HardwareBanter: a computer components & hardware forum



I want to build a 1.5TB storage array for MythTV



 
 
#11 | August 6th 04, 08:45 PM | Joe Smith

Scott Lurndal wrote:

FTP is hardly a useful benchmark for determining the peak ethernet
performance anyway. Try with something lower overhead like rcp.


Huh? Once you've gotten past the dialog that starts the transfer,
ftp has no level 3 overhead: it sends just data bytes until EOF.

-Joe
#12 | August 7th 04, 08:45 PM

"J. Clarke" wrote in message ...
You might want to check the CompUSA site--they were having a deal for 200
gig drives for about 90 bucks counting a mail-in rebate, but I don't know
if it's still on.


The last time I saw one of these rebates, it was limited to one per
household unfortunately.

* Accordingly, get a HighPoint SATA RAID card instead of the specified
RocketRAID 454 ATA RAID card. I think the RocketRAID 1640


Personally for a RAID that size I'd go for a 3Ware or LSI Logic. No
particular reason, just that I'm used to terabytes being mainframe
territory and I get nervous with consumer RAID controllers trying to handle
that much data.


I think, from the link he mentioned, that he intends to use the Linux
md driver (software RAID). Most of those cheap RAID cards are not
hardware RAID at all. They advertise it falsely and give you a Windows
driver that does software RAID.

Nothing is worse than cheap RAID. If you're not using something solid,
even if it does really do hardware RAID, your best bet is software
RAID. The Linux software RAID is better-tested and more widely used
(by people who will notice if something went wrong and get it fixed or
at least complain) than any of the cheapo RAID or fakeraid cards.
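For what it's worth, setting up such an array with the md driver is only a couple of commands. A sketch, assuming mdadm is installed and four drives show up as /dev/hde through /dev/hdh (the device names here are examples, not from the original poster's setup):

```shell
# Create a 4-drive RAID 5 set (adjust device names for your box)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/hde /dev/hdf /dev/hdg /dev/hdh

# The initial resync runs in the background; watch its progress here
cat /proc/mdstat

# Then put a filesystem on it and mount as usual
mkfs.xfs /dev/md0
mount /dev/md0 /store
```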

Bigger issues than the size of the RAID group are things like: what
happens when a drive fails? This can be very hard to test without
special HD firmware made for the purpose. I've seen cheap hardware
RAIDs where yanking a drive out is handled just fine, but when a
drive fails for real, the errors get passed through to the OS.

Or how does the RAID behave when running into bad blocks on a "good"
drive, while resyncing onto a hot-spare or a replaced disk. I've seen
even hugely expensive "enterprise" solutions fail miserably here.

Or how does the RAID behave when it sees a double-disk failure? Even
some high-end solutions give you no way to recover. Obviously if both
drives have really failed, you're screwed. But what if a power
connection gets knocked loose?
#13 | August 7th 04, 10:26 PM | Jeff Rife

) wrote in alt.video.ptv.tivo:
Or how does the RAID behave when it sees a double-disk failure? Even
some high-end solutions give you no way to recover. Obviously if both
drives have really failed, you're screwed. But what if a power
connection gets knocked loose?


I had this exact situation with Linux software RAID-5 (2.6 kernel), and
a simple remove and re-add of the drives caused the re-build to start, and
I lost no data.

I'm trying to figure out how to get a spare onto the array to avoid even
this sort of problem, but although the software can do it, I don't have any
more room in the case.
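For anyone who hits the same thing, the remove-and-re-add dance is just mdadm against the member partitions (device names here are examples; check /proc/mdstat to see which members got kicked out):

```shell
# Remove the drive that got kicked out of the array, then re-add it;
# md starts rebuilding onto it automatically
mdadm /dev/md0 --remove /dev/hdc1
mdadm /dev/md0 --add /dev/hdc1

# Watch the rebuild progress
cat /proc/mdstat

# With a running array at full strength, adding one more disk just
# makes it a hot spare, e.g.:  mdadm /dev/md0 --add /dev/hdg1
```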

--
Jeff Rife
SPAM bait: http://www.nabs.net/Cartoons/UserFri...munication.gif
#14 | August 8th 04, 12:25 AM | Yeechang Lee

wrote:
The last time I saw one of these rebates, it was limited to one per
household unfortunately.


You may be right; hopefully I will be able to find a bona fide price
cut soon. As mentioned, even $170 (the pre-rebate price) isn't bad.

I think, from the link he mentioned, that he intends to use the Linux
md driver. (software RAID). Most of those cheap RAID cards are not
hardware RAID at all.


I definitely get the sense from my research that modern Linux software
RAID is superior to the consumer-grade RAID cards. That said, if I
follow the approach outlined at URL:http://www.finnie.org/terabyte/,
I will be using both the RocketRAID card's RAID 5 *and* software RAID
0. But if Finnie is mistaken and software RAID alone is sufficient and
more reliable, I'd go that way.
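(For reference, a RAID 50 arrangement like Finnie's can also be built entirely in software: two md RAID 5 sets with a RAID 0 striped across them. A sketch, assuming eight drives; all device names are hypothetical:)

```shell
# Two 4-drive RAID 5 sets (example device names)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/hde /dev/hdf /dev/hdg /dev/hdh
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
    /dev/hdi /dev/hdj /dev/hdk /dev/hdl

# Stripe a RAID 0 across the two RAID 5 sets -> RAID 50
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
```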

Or how does the RAID behave when it sees a double-disk failure? Even
some high-end solutions give you no way to recover. Obviously if
both drives have really failed, you're screwed. But what if a power
connection gets knocked loose?


Reliability is important, of course, but my desire for it has
limits. Since the array would be used to hold video files, of course I
want some degree of redundancy (as a RAID 50 arrangement apparently
provides). That said, video files aren't *so* important as to
necessitate, in my mind, extraordinary redundancy; that's why I'm
willing to take the risk of two disks going bad at once, and why I'm
not even going to bother trying to back up a 1.5TB array.

--
Read my Deep Thoughts @ URL:http://www.ylee.org/blog/ PERTH ---- *
Cpu(s): 8.7% us, 5.2% sy, 85.7% ni, 0.0% id, 0.0% wa, 0.3% hi, 0.0% si
Mem: 516960k total, 512916k used, 4044k free, 14452k buffers
Swap: 2101032k total, 1070004k used, 1031028k free, 135152k cached
#15 | August 8th 04, 02:06 AM | Wolf

"Jeff Rife" wrote in message
...
) wrote in alt.video.ptv.tivo:
Or how does the RAID behave when it sees a double-disk failure? Even
some high-end solutions give you no way to recover. Obviously if both
drives have really failed, you're screwed. But what if a power
connection gets knocked loose?


I had this exact situation with Linux software RAID-5 (2.6 kernel), and
a simple remove and re-add of the drives caused the re-build to start, and
I lost no data.

I'm trying to figure out how to get a spare onto the array to avoid even
this sort of problem, but although the software can do it, I don't have any
more room in the case.


If you are going to do software RAID on the 2.6 kernel anyway, why not
look into RAID 6, which is supposed to have the equivalent of two parity
drives? That's something I am going to look into soon.
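Here's how the parity tradeoff works out in numbers with eight 200GB drives like the ones discussed in this thread, plus a hypothetical mdadm invocation. (The raid6 code is new and experimental in 2.6 kernels, and the device names are examples, so treat this as a sketch:)

```shell
# RAID 5 gives up one drive's worth of capacity to parity; RAID 6 gives up two
drives=8
size_gb=200
raid5_gb=$(( (drives - 1) * size_gb ))
raid6_gb=$(( (drives - 2) * size_gb ))
echo "RAID 5 usable: ${raid5_gb}GB, RAID 6 usable: ${raid6_gb}GB"

# Creating a RAID 6 set looks just like RAID 5, only with --level=6
# (needs the raid6 module; example device names):
#   mdadm --create /dev/md0 --level=6 --raid-devices=8 \
#       /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
#       /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
```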

--
Angel R. Rivera aka Wolf
----------------------------------------------------------------
Please post all responses to UseNet. All email cheerfully and automagically
routed to Dave Null


#16 | August 15th 04, 05:36 PM | Will Dormann

Will Dormann wrote:

I'm doing a cheap-o scaled-down version of what you describe. As soon
as my drives come in I'll let you know. I have a feeling that the
latency will be increased when seeking through a recording, but other
than that it might be OK.


Ok, I've got my NAS machine set up and it works quite well. I figured
I'd share my notes if anybody's interested. Here's the description of
the machine:

AMD K6-III 450
256MB RAM
3x Samsung 120GB IDE drives
Cheap SiI0680 IDE controller
Cheap 100Mb NIC (Realtek 8139)

Gentoo Linux using kernel 2.6.7

BOOT ext3 on RAID1 with hot spare
ROOT xfs on RAID1 with hot spare
STORE xfs on RAID5

Gentoo 2004.2 wouldn't recognize the 0680 IDE controller on boot. I
could modprobe the module for it and it seemed to be detected ok
according to dmesg. But I could not access any of the attached drives.
Perhaps I missed something other than doing the modprobe to make the
drives accessible? So I used Gentoo 2004.1 and that worked fine. I
had to manually modprobe the 8139too module, but after that the
networking was fine.

When using the Gentoo Live CD, the system was stable. But after
installation and booting from the hard drives, it was not. It would
often hang on drive access, giving "Lost Interrupt" errors for the
devices on the SiI0680. This was with both 2.4 and 2.6 kernels. I
finally tracked it down to a BIOS setting for the PCI interrupt. It was
set to "Edge", but changing it to "Level" fixed the problem. Why I
didn't see any issues when booting from the live CD, I'm not sure...

Originally I had the ROOT and STORE partitions set up as JFS. Perhaps
because of the couple of system hangs, the ROOT partition got corrupted.
The symptoms I noticed were that the filesystem would seem to randomly
switch to read-only mode, requiring me to reboot. In the kernel log
were the errors:

Aug 14 15:15:51 [kernel] ERROR: (device md2): stack overrun in dtSearch!
Aug 14 15:15:51 [kernel] btstack dump:
Aug 14 15:15:51 [kernel] bn = 0, index = 0
- Last output repeated 6 times -
Aug 14 15:15:51 [kernel] bn = cffdc960c015dcfb, index = 208

I ran jfs_fsck and forced it to scan the whole filesystem, but that
didn't cause the above issue to disappear. Figuring that jfs wasn't
quite ready for prime time, I decided to switch to xfs instead. Using
the gentoo live CD and my "hot spare" drive, I copied the contents of
the ROOT partition, formatted md2 with xfs, and then copied the contents
back.
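The copy-out/copy-back step is just a pair of tar pipelines (the mount points here are examples, not my actual paths; tar preserves permissions and ownership across the move):

```shell
OLD=/mnt/root     # example mount point of the old JFS root (md2)
SPARE=/mnt/spare  # example mount point of the hot-spare drive

# Copy everything off the root partition onto the spare
tar -C "$OLD" -cf - . | tar -C "$SPARE" -xpf -

# Reformat the array as xfs (run against the md device, not the mount point):
#   umount "$OLD" && mkfs.xfs /dev/md2 && mount /dev/md2 "$OLD"

# Then copy the contents back
tar -C "$SPARE" -cf - . | tar -C "$OLD" -xpf -
```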

Now, how it works....

The K6-III 450 works fine for this purpose, at 100Mbit ethernet speeds.
I don't think I'd want anything slower, though. I'm currently
copying over my MythTV recordings (very large files) via NFS and the CPU
usage is around 60%. Almost 10% of this is from the md3_raid5
process, and the rest is nfsd. When copying over files from my windows
machine via Samba, the CPU usage was around 40% I believe. I haven't
done any special tuning or benchmarking, so I'm not sure if NFS is less
efficient than Samba, or if it's just moving the data faster.

The machine works great as a MythTV store. I see no latency problems at
all when skipping back and forth through a recording. And the MythTV
machine will probably run cooler now due to the decrease in disk
activity. I currently have it set up to have the "live" TV buffer on
the local disk, and the recording on the NAS. In the setup for MythTV,
the settings for these two locations are separate, which is cool.

Now, I wonder how long it'll be before I fill it up! At least the
price was right, with the total out-of-pocket cost being about $250.


-WD
#17 | August 15th 04, 08:53 PM | Will Dormann

Will Dormann wrote:
I'm currently copying
over my MythTV recordings (very large files) via NFS and the CPU usage
is around 60%. Almost 10% of this is from the md3_raid5 process, and
the rest is nfsd. When copying over files from my windows machine via
Samba, the CPU usage was around 40% I believe. I haven't done any
special tuning or benchmarking, so I'm not sure if NFS is less efficient
than Samba, or if it's just moving the data faster.


I'm doing some copying of similar files over Samba now, and the CPU
usage is between 60% and 70%. So it appears that NFS and Samba are
pretty close, CPU-wise.


-WD
 








Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.