RAID 5 again - budget hardware solution



 
 
  #1
September 19th 04, 05:57 PM
Meurig Freeman

As a home user I'm after a budget solution, and as a RAID newbie I'm in
need of a little bit of advice.

The background: I need to get a lot of data onto a lot of drives as
quickly as possible, and it has to be done via 100Mbps Ethernet. The
Ethernet is my current bottleneck, and I intend to overcome it by
connecting three 100Mbps devices via a gigabit switch to a gigabit card
on a server, thereby enabling me to fill three hard disks at once (I hope).

I foresee the HDs being the next limitation, so I intend to buy a
Promise FastTrak S150 SX4 controller and use a RAID 5 setup with four
WD Caviar 200GB SATA150 8MB 7200rpm drives, giving me 600GB of storage
with some protection from the extra drive's worth of parity.
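
The capacity maths, just to sanity-check myself (a quick sketch in
Python; the drive count and size are the figures above):

# RAID 5 keeps one drive's worth of space for parity, so usable
# capacity is (N - 1) * drive_size. Figures are from my plan above.
drives = 4
drive_size_gb = 200

usable_gb = (drives - 1) * drive_size_gb
parity_overhead = 1 / drives

print(f"usable capacity : {usable_gb} GB")         # 600 GB
print(f"parity overhead : {parity_overhead:.0%}")  # 25%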

This is going into my desktop machine, a 2.4GHz system running WinXP Pro
with a Gigabyte 8INXP motherboard (32-bit 33MHz PCI bus). It's not a
dedicated server, so I'd like to take some load off the CPU by using the
(semi) hardware RAID of the controller card (as I understand it, there's
an XOR chip onboard to handle most of the parity calculations, with the
rest being passed on to the main CPU).

Basically what I need to know is: does the theory sound right before I
fork out the cash (£500 is a lot to me!), or is there a better way?

And am I overestimating the capabilities of the RAID array? With no
prior experience I'm not sure what to expect.

Thanks in advance for any (and I mean any!) input you can give,
--
Meurig
  #2
September 20th 04, 12:30 AM
Ron Reaugh


"Meurig Freeman" wrote in message
...
As a home user I am in need of a budget solution. As a RAID newbie I'm
in need of a little bit of advice.

The background - I need to get a lot of data onto a lot of drives as
quickly as possible, and it has to be done via 100Mbps ethernet. The
ethernet is my current bottleneck and I intend to overcome it by
connecting three 100Mbps devices via a gigabit switch to a gigabit card
on a server.


That works.

Thereby enabling me to fill three hard disks at once (I hope).


In RAID 5 there's no separate filling. There's just one four-drive array
(sized like three drives) that looks like a single physical HD. All data
is spread across all drives. You can partition the array into logical
drive letters just like any other drive, but everything is still spread
across all of them. Having 3 streams writing might fill the array faster
than one, except that RAID 5 writes slower than it reads.
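
The protection comes from parity, which is nothing more than an XOR
across the data chunks in each stripe; that's also how a dead drive gets
rebuilt. A toy illustration (plain Python, short byte strings standing
in for the stripe chunks):

# RAID 5 parity in miniature: parity = XOR of the data chunks, and any
# one missing chunk can be rebuilt by XOR-ing everything that survives.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data chunks on three drives
parity = xor_blocks([d1, d2, d3])        # parity chunk on the fourth

# "Lose" drive 2, then rebuild its chunk from the survivors.
rebuilt = xor_blocks([d1, d3, parity])
assert rebuilt == d2
print("recovered chunk:", rebuilt)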

I foresee the HD being the next limitation, as such I intend to buy a
Promise FastTrak S150 SX4 controller and use a RAID 5 set up with four
WD Caviar 200GB SATA150 8MB 7200rpm drives and so get 600GB of storge
with some protection from the additional drive.

This is going into my desktop machine, a 2.4GHz system running WinXP pro
with a Gigabyte 8INXP motherboard (32bit 33MHz PCI bus). It's not a
dedicated server so I'd like to take some load off the CPU by using the
(semi) hardware RAID of the controller card (As I understand it, there's
an XOR chip onboard to handle most of the calculations, with the rest
being passed on to the main CPU).


"semi"..right. It works.

Basically what I need to know is - does the theory sound right before I
fork out the cash (£500 is a lot to me!), or is there a better way?


It all works and fairly well.

And am I overestimating the capabilities of the raid array?


To do what exactly? It will keep your data reliably if you quickly replace
a failed drive. Always keep a good backup. RAID should never be used to
supplant a good backup scheme.

with no
prior experience I'm not sure what to expect.



What is your overall final goal? Why have you chosen this solution?

Thank in advance for any (and I mean any!) input you can give,
--
Meurig



  #3
September 20th 04, 12:55 AM
Meurig Freeman

Ron Reaugh wrote:

"Meurig Freeman" wrote in message
...

As a home user I am in need of a budget solution. As a RAID newbie I'm
in need of a little bit of advice.

The background - I need to get a lot of data onto a lot of drives as
quickly as possible, and it has to be done via 100Mbps ethernet. The
ethernet is my current bottleneck and I intend to overcome it by
connecting three 100Mbps devices via a gigabit switch to a gigabit card
on a server.



That works.


Thereby enabling me to fill three hard disks at once (I hope).



In RAID 5 there's no separate filling. There just one four drive(sized like
3) array that looks like a single physical HD. All data is spread across
all drives. You can partition the array into logical drive letters just
like any other drive but all is still spread across all. Having 3 streams
writing might fill the array faster than one except that RAID 5 writes
slower than it reads.


Don't think I explained this very well, sorry. The RAID array is to be
the source of the data (the server); the destinations are clients on
100Mbps. I basically want to try and saturate the 100Mbps connections of
three clients at once, using the server with a gigabit connection and a
RAID 5 array.



I foresee the HD being the next limitation, as such I intend to buy a
Promise FastTrak S150 SX4 controller and use a RAID 5 set up with four
WD Caviar 200GB SATA150 8MB 7200rpm drives and so get 600GB of storge
with some protection from the additional drive.

This is going into my desktop machine, a 2.4GHz system running WinXP pro
with a Gigabyte 8INXP motherboard (32bit 33MHz PCI bus). It's not a
dedicated server so I'd like to take some load off the CPU by using the
(semi) hardware RAID of the controller card (As I understand it, there's
an XOR chip onboard to handle most of the calculations, with the rest
being passed on to the main CPU).



"semi"..right. It works.


Basically what I need to know is - does the theory sound right before I
fork out the cash (£500 is a lot to me!), or is there a better way?



It all works and fairly well.


And am I overestimating the capabilities of the raid array?



To do what exactly? It will keep your data reliably if you quickly replace
a failed drive. Always keep a good backup. RAID should never be used to
supplant a good backup scheme.


with no
prior experience I'm not sure what to expect.




What is your overall final goal? Why have you chosen this solution?


As I've tried to fill in above, the goal is to try and saturate the
100Mbps connections of the clients (three at once ideally, perhaps even
four?). Is the RAID array going to be fast enough? Are there going to
be other bottlenecks, like the PCI bus?



Thank in advance for any (and I mean any!) input you can give,
--
Meurig



Thank you for taking the time to try and understand my OP, and thanks
for your help :-)

--
Meurig
  #4
September 20th 04, 01:33 AM
Ron Reaugh


"Meurig Freeman" wrote in message
...
Ron Reaugh wrote:

"Meurig Freeman" wrote in message
...

As a home user I am in need of a budget solution. As a RAID newbie I'm
in need of a little bit of advice.

The background - I need to get a lot of data onto a lot of drives as
quickly as possible, and it has to be done via 100Mbps ethernet. The
ethernet is my current bottleneck and I intend to overcome it by
connecting three 100Mbps devices via a gigabit switch to a gigabit card
on a server.



That works.


Thereby enabling me to fill three hard disks at once (I hope).



In RAID 5 there's no separate filling. There's just one four-drive array
(sized like three drives) that looks like a single physical HD. All data
is spread across all drives. You can partition the array into logical
drive letters just like any other drive, but everything is still spread
across all of them. Having 3 streams writing might fill the array faster
than one, except that RAID 5 writes slower than it reads.


Don't think I explained this very well sorry. The raid array is to be
the source of the data (the server), the destination is clients on
100Mbps. I basically want to try and saturate the 100MBps connection of
three clients at once using the server with a gigabit connection and a
raid 5 array.


That works.

I foresee the HD being the next limitation, as such I intend to buy a
Promise FastTrak S150 SX4 controller and use a RAID 5 set up with four
WD Caviar 200GB SATA150 8MB 7200rpm drives and so get 600GB of storge
with some protection from the additional drive.

This is going into my desktop machine, a 2.4GHz system running WinXP pro
with a Gigabyte 8INXP motherboard (32bit 33MHz PCI bus). It's not a
dedicated server so I'd like to take some load off the CPU by using the
(semi) hardware RAID of the controller card (As I understand it, there's
an XOR chip onboard to handle most of the calculations, with the rest
being passed on to the main CPU).



"semi"..right. It works.


Basically what I need to know is - does the theory sound right before I
fork out the cash (£500 is a lot to me!), or is there a better way?



It all works and fairly well.


And am I overestimating the capabilities of the raid array?



To do what exactly? It will keep your data reliably if you quickly replace
a failed drive. Always keep a good backup. RAID should never be used to
supplant a good backup scheme.


with no
prior experience I'm not sure what to expect.




What is your overall final goal? Why have you chosen this solution?


As I've tried to fill in above, the goal is to try and saturate the
100Mbps connections of the clients (three at once ideally, perhaps even
four?). Is the raid array going to be fast enough? Are there going to
be other bottlenecks, like the pci bus?


Saturate the 100BT with what... streaming like video, big file transfers,
or small-record random I/O database-type stuff? Will all the 100BT
connections be getting exactly the same data at the same time?

You might be PCI/mobo limited.


  #5
September 20th 04, 01:55 AM
Meurig Freeman

Ron Reaugh wrote:
"Meurig Freeman" wrote in message
...

Ron Reaugh wrote:


"Meurig Freeman" wrote in message
.. .


snip


Don't think I explained this very well sorry. The raid array is to be
the source of the data (the server), the destination is clients on
100Mbps. I basically want to try and saturate the 100MBps connection of
three clients at once using the server with a gigabit connection and a
raid 5 array.



That works.


I foresee the HD being the next limitation, as such I intend to buy a
Promise FastTrak S150 SX4 controller and use a RAID 5 set up with four
WD Caviar 200GB SATA150 8MB 7200rpm drives and so get 600GB of storge
with some protection from the additional drive.

This is going into my desktop machine, a 2.4GHz system running WinXP pro
with a Gigabyte 8INXP motherboard (32bit 33MHz PCI bus). It's not a
dedicated server so I'd like to take some load off the CPU by using the
(semi) hardware RAID of the controller card (As I understand it, there's
an XOR chip onboard to handle most of the calculations, with the rest
being passed on to the main CPU).


snip

What is your overall final goal? Why have you chosen this solution?


As I've tried to fill in above, the goal is to try and saturate the
100Mbps connections of the clients (three at once ideally, perhaps even
four?). Is the raid array going to be fast enough? Are there going to
be other bottlenecks, like the pci bus?



Saturate the 100BT with what..streaming like video or big file transfers or
small record random I/O database type stuff? Will all the 100BT connections
be getting exactly the same data at the same time?

You might be PCI/mobo limited.



Large file transfers (average about 2GB each, totalling about 200GB); the
clients won't all be getting the same data at once (it'll all be the
same data, but unfortunately out of sync).

Atm I can only do one at a time (I fill the hard disk on the client,
then swap it for an empty one and start again), it takes me about 6
hours (give or take). More clients isn't a problem, but they are
limited to 100Mbps.

A proprietary file system on the clients means transfers have to be done
via the network. (Okay, I'm looking into possible alternatives, but this
is the way I'd like to do it, as a 600GB RAID 5 array would be nice if I
can justify the cost.)

Sorry for keeping things in the abstract, hope it hasn't caused too many
problems.

I calculate the theoretical PCI bus speed to be about 125MB/s. With the
theoretical maximum throughput of four 100Mbps connections being 50MB/s,
I was hoping the PCI bus wasn't going to be a problem.
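
Spelling out my back-of-the-envelope figures (a rough sketch in Python;
the 32-bit/33MHz PCI ceiling is usually quoted as about 133MB/s, I've
rounded down to be safe):

# Theoretical peaks only; real-world numbers will be lower on both sides.
pci_clock_mhz = 33.33
pci_width_bytes = 4                          # 32-bit bus
pci_peak = pci_clock_mhz * pci_width_bytes   # ~133 MB/s

clients = 4
link_mbps = 100
network_peak = clients * link_mbps / 8       # 50 MB/s

print(f"PCI bus peak        : {pci_peak:.0f} MB/s")
print(f"4 x 100Mbps clients : {network_peak:.0f} MB/s")
# Note: if the RAID card and the gigabit NIC share the same PCI bus,
# every byte crosses it twice (disks -> RAM -> NIC), which eats into
# the apparent headroom.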

I understand that theory and 'back of an envelope' mathematics only go
so far though, hence my post.

Thanks again for being so patient and helpful,
--
Meurig
  #6
September 20th 04, 02:35 AM
Ron Reaugh


"Meurig Freeman" wrote in message

Saturate the 100BT with what... streaming like video, big file transfers,
or small-record random I/O database-type stuff? Will all the 100BT
connections be getting exactly the same data at the same time?

You might be PCI/mobo limited.



Large file transfers (average about 2GB each, totally about 200GB), the
clients won't all be getting the same data at once (it'll all be the
same data, but unfortunately out of sync).

Atm I can only do one at a time (I fill the hard disk on the client,
then swap it for an empty one and start again), it takes me about 6
hours (give or take). More clients isn't a problem, but they are
limited to 100Mbps.


Explain, what exactly "isn't a problem"?

Let's see: 6 hours for 200GB is
2e11/(6x3600) = 9.3 MB/sec. THAT'S VERY GOOD!

But 6 hours seems oppressive.
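
Spelled out (a quick check in Python, same figures):

# 200GB moved in 6 hours, as above.
bytes_moved = 2e11
seconds = 6 * 3600
print(f"{bytes_moved / seconds / 1e6:.1f} MB/sec")   # ~9.3 MB/sec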

A proprietary file system on the clients means transfers have to be done
via the network. (okay, I'm looking into possible alternatives, but this
is the way I'd like to do it as a 600GB raid 5 array would be nice if I
can justify the cost).

Sorry for keeping things in the abstract, hope it hasn't caused too many
problems.

I calculate the theoretical pci bus speed to be about 125MB/s.


Won't ever actually reach that.

With the
theoretical maximum throughput of four 100Mbps connections being 50MB/s


Probably closer to 32MB/sec., but that assumes that the 100BT link on both
ends is capable of full-rate streaming... a big assumption and highly
OS/driver dependent.

I was hoping the pci bus wasn't going to be a problem.


64MB/sec. aggregate is possible. The gigabit NIC selection might be
important. The Promise card should be able to deliver 3 [or 4] streams
(8MB/sec. each) of big file data simultaneously, but I don't know about
that card specifically.

I understand that theory and 'back of an envelope' mathemaics only goes
so far though, hence my post.

Thanks again for being so patient and helpful,
--
Meurig



  #7
September 20th 04, 03:14 AM
Meurig Freeman

Ron Reaugh wrote:

"Meurig Freeman" wrote in message


Saturate the 100BT with what... streaming like video, big file transfers,
or small-record random I/O database-type stuff? Will all the 100BT
connections be getting exactly the same data at the same time?

You might be PCI/mobo limited.



Large file transfers (average about 2GB each, totally about 200GB), the
clients won't all be getting the same data at once (it'll all be the
same data, but unfortunately out of sync).

Atm I can only do one at a time (I fill the hard disk on the client,
then swap it for an empty one and start again), it takes me about 6
hours (give or take). More clients isn't a problem, but they are
limited to 100Mbps.



Explain, what exactly "isn't a problem"?


Perhaps I should come clean: the 'clients' are Xboxes fitted with larger
HDs, and the proprietary file system is FATX. I essentially mod the
Xboxes, fill the hard disk with content (can I at least pretend it's
entirely legal?) and sell them on. So I have an Xbox for every HD, and I
normally have a few sat around spare, so rigging 3 or 4 (or more) up for
transfers isn't a problem.


Let's see: 6 hours for 200GB is
2e11/(6x3600) = 9.3 MB/sec. THAT'S VERY GOOD!

But 6 hours seems oppressive.


Looking at it currently, perhaps 8MB/s is closer (it's more like 180GB
than 200, and more like 6.5hrs than 6). It's using what I believe to be
a proprietary extension of the FTP protocol, referred to as 'burst mode'.



A proprietary file system on the clients means transfers have to be done
via the network. (okay, I'm looking into possible alternatives, but this
is the way I'd like to do it as a 600GB raid 5 array would be nice if I
can justify the cost).

Sorry for keeping things in the abstract, hope it hasn't caused too many
problems.

I calculate the theoretical pci bus speed to be about 125MB/s.



Won't ever actually reach that.


I understand that, but since the theoretical max for the network
connections won't ever be reached either, I was hoping the two would
cancel out?



With the
theoretical maximum throughput of four 100Mbps connections being 50MB/s



Probably closer to 32MB/sec. but that assumes that the 100BT link on both
ends is capable of full rate streaming...a big assumption and highly OS
driver dependent.


It's going at avg. 8300KB/s atm, and that's from a sub-£5 100Mbps NIC
via a 100Mbps switch two floors above, back to an Xbox. I'm hoping a
gigabit switch and gigabit NIC won't introduce bottlenecks here.



I was hoping the pci bus wasn't going to be a problem.



64MB/sec.aggregate is possible. The gigabit NIC selection might be
important.
The Promise card should be able to deliver 3[4] streams(8MB/sec.) of big
file data simultaneously but I don't know about that card specifically


I have an onboard gigabit port, but I also have another Gb NIC I can
test, so I will be sure to try out both.

As for the Promise card, a review on TomsHardware.com had the following
graph:
http://www.tomshardware.com/storage/...a-raid-14.html
showing the card achieving transfer rates in excess of 100MB/s, though
the testbed had faster HDs and what I believe to be a 66MHz PCI bus.

It also had a lot of benchmarks showing results around the 3MB/s range
though. They had something to do with queue depth, but I didn't really
understand, e.g.
http://www.tomshardware.com/storage/...a-raid-13.html

I'm guessing the first graph is closer to what I am trying to achieve?

You've really helped, thank you ever so much for all your time,
--
Meurig
  #8
September 20th 04, 03:37 AM
Ron Reaugh


"Meurig Freeman" wrote in message
...
Ron Reaugh wrote:

"Meurig Freeman" wrote in message


Saturate the 100BT with what... streaming like video, big file transfers,
or small-record random I/O database-type stuff? Will all the 100BT
connections be getting exactly the same data at the same time?

You might be PCI/mobo limited.



Large file transfers (average about 2GB each, totally about 200GB), the
clients won't all be getting the same data at once (it'll all be the
same data, but unfortunately out of sync).

Atm I can only do one at a time (I fill the hard disk on the client,
then swap it for an empty one and start again), it takes me about 6
hours (give or take). More clients isn't a problem, but they are
limited to 100Mbps.



Explain, what exactly "isn't a problem"?


Perhaps I should come clean, the 'clients' are xboxes fitted with larger
HD's, the proprietary file system is fatx. I essentially mod the
xboxes, fill the hard disk with content (can I at least pretend it's
entirely legal?) and sell them on. So I have an xbox for every HD,
normally have a few sat around spare, rigging 3/4 or more up for
transfers isn't a problem.


Build one such Xbox HD. Put it in a desktop PC and use a bit-for-bit
drive clone utility, and it'll be vastly faster. Want to do 3 at a time?
Then use 3 inexpensive older PCs. I don't see what RAID 5 and 600GB have
to do with anything.
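
A bit-for-bit clone is just a raw block copy from one drive to another;
any clone utility boils down to something like this Python sketch (the
device paths are placeholders, and you need to be absolutely sure which
drive is source and which is destination before running anything like it):

# Sketch of a raw block copy (what a bit-for-bit clone utility does).
# SRC and DST are placeholder device paths; adjust for your OS, and
# triple-check them, since writing to the wrong device destroys it.
SRC = "/dev/source_drive"    # master Xbox-formatted drive (hypothetical path)
DST = "/dev/target_drive"    # blank drive to be filled (hypothetical path)
CHUNK = 1024 * 1024          # copy 1 MiB at a time

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        block = src.read(CHUNK)
        if not block:
            break
        dst.write(block)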

Let's see: 6 hours for 200GB is
2e11/(6x3600) = 9.3 MB/sec. THAT'S VERY GOOD!

But 6 hours seems oppressive.


Looking at it currently perhaps 8MB/s is closer


8MB/sec. is the working figure I use for 100BT.

(it's more like 180GB
that 200, and more like 6.5hrs than 6), it's using what I believe to be
a proprietary extention of the ftp protocol, refered to as 'burst mode'.



A proprietary file system on the clients means transfers have to be done
via the network. (okay, I'm looking into possible alternatives, but this
is the way I'd like to do it as a 600GB raid 5 array would be nice if I
can justify the cost).

Sorry for keeping things in the abstract, hope it hasn't caused too many
problems.

I calculate the theoretical pci bus speed to be about 125MB/s.



Won't ever actually reach that.


I understand that, but figure since the theoretical max for the network
connections won't ever be reached either I was hoping the two would
cancel out?


No, they add.


With the
theoretical maximum throughput of four 100Mbps connections being 50MB/s



Probably closer to 32MB/sec., but that assumes that the 100BT link on both
ends is capable of full-rate streaming... a big assumption and highly
OS/driver dependent.


It's going at avg. 8300KB/s atm. and that's from a sub £5 100Mbps NIC
via a 100Mbps switch two floors above back to an xbox. I'm hoping a
gigabit switch and gigabit NIC won't introduce bottlenecks here.



I was hoping the pci bus wasn't going to be a problem.



64MB/sec. aggregate is possible. The gigabit NIC selection might be
important. The Promise card should be able to deliver 3 [or 4] streams
(8MB/sec. each) of big file data simultaneously, but I don't know about
that card specifically.


I have an onboard gigabit port, but I also have another Gb NIC I can
test, I will be sure to try out both.

As for the promise card, a review on TomsHardware.com had the following
graph:
http://www.tomshardware.com/storage/...a-raid-14.html
showing the card acheiving transfer rates in excess of 100MB/s though
the testbed had faster HD's and what I believe to be a 66MHz pci bus.

It also had a lot of benchmarks showing results around the 3MB/s range
though. They had something to due with a queue depth, but I didn't
really understand. e.g.
http://www.tomshardware.com/storage/...a-raid-13.html

I'm guessing the first graph is closer to what I am trying to acheive?

You've really helped, thank you ever so much for all your time,



  #9
September 20th 04, 12:43 PM
Meurig Freeman

Ron Reaugh wrote:
"Meurig Freeman" wrote in message
...

Ron Reaugh wrote:


"Meurig Freeman" wrote in message



Saturate the 100BT with what... streaming like video, big file transfers,
or small-record random I/O database-type stuff? Will all the 100BT
connections be getting exactly the same data at the same time?

You might be PCI/mobo limited.



Large file transfers (average about 2GB each, totally about 200GB), the
clients won't all be getting the same data at once (it'll all be the
same data, but unfortunately out of sync).

Atm I can only do one at a time (I fill the hard disk on the client,
then swap it for an empty one and start again), it takes me about 6
hours (give or take). More clients isn't a problem, but they are
limited to 100Mbps.


Explain, what exactly "isn't a problem"?


Perhaps I should come clean, the 'clients' are xboxes fitted with larger
HD's, the proprietary file system is fatx. I essentially mod the
xboxes, fill the hard disk with content (can I at least pretend it's
entirely legal?) and sell them on. So I have an xbox for every HD,
normally have a few sat around spare, rigging 3/4 or more up for
transfers isn't a problem.



Build one such XBox HD. Put it in a desktop PC and use a drive bit for bit
clone utility and it'll be vastly faster. Want to do 3 at a time then use 3
inexpensive older PCs. I don't see what RAID 5 and 600GB has to do with
anything.


Thanks for the advice; I hadn't thought of using such a low-level copy.
I'm going to try doing this later today when I get a chance, to give me
an idea of how quickly it can be done. It's not an ideal solution because
the data going to each hard disk does differ slightly, but even with
making the necessary changes it will still likely work out considerably
faster.

I'm still interested in the RAID array because I would like more
storage, with some level of data protection. This exercise is mainly
justification for the price tag.


Let's see: 6 hours for 200GB is
2e11/(6x3600) = 9.3 MB/sec. THAT'S VERY GOOD!

But 6 hours seems oppressive.


Looking at it currently perhaps 8MB/s is closer



8MB/sec. is the workin figure I use for 100BT.


(it's more like 180GB
that 200, and more like 6.5hrs than 6), it's using what I believe to be
a proprietary extention of the ftp protocol, refered to as 'burst mode'.



A proprietary file system on the clients means transfers have to be done
via the network. (okay, I'm looking into possible alternatives, but this
is the way I'd like to do it as a 600GB raid 5 array would be nice if I
can justify the cost).

Sorry for keeping things in the abstract, hope it hasn't caused too many
problems.

I calculate the theoretical pci bus speed to be about 125MB/s.


Won't ever actually reach that.


I understand that, but figure since the theoretical max for the network
connections won't ever be reached either I was hoping the two would
cancel out?



No, they add.


I'm not sure I understand. Say in practice the PCI bus will only cope
with 50% of its maximum theoretical throughput; this would start to be a
problem if the network could achieve its maximum throughput. But since
the network connections can only achieve 75% of their maximum throughput,
the PCI bus still isn't a problem, right?

Or am I missing something? Perhaps network overheads contribute somehow?
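
Putting my guesses into numbers (the 50% and 75% figures are assumptions
for the sake of argument, not measurements; quick Python check):

# Compare a pessimistic PCI estimate against a realistic network demand.
pci_peak = 125.0               # MB/s, theoretical 32-bit/33MHz PCI
net_peak = 50.0                # MB/s, four 100Mbps links flat out

pci_usable = pci_peak * 0.50   # assume the bus manages only 50%
net_demand = net_peak * 0.75   # assume the links manage 75%

print(f"usable PCI bandwidth : {pci_usable:.1f} MB/s")   # 62.5
print(f"network demand       : {net_demand:.1f} MB/s")   # 37.5
# Looks like headroom, but the RAID card and the NIC may share the bus,
# so reads from the array and sends to the NIC both come out of that 62.5.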



With the
theoretical maximum throughput of four 100Mbps connections being 50MB/s


Probably closer to 32MB/sec., but that assumes that the 100BT link on both
ends is capable of full-rate streaming... a big assumption and highly
OS/driver dependent.


It's going at avg. 8300KB/s atm. and that's from a sub £5 100Mbps NIC
via a 100Mbps switch two floors above back to an xbox. I'm hoping a
gigabit switch and gigabit NIC won't introduce bottlenecks here.



I was hoping the pci bus wasn't going to be a problem.


64MB/sec. aggregate is possible. The gigabit NIC selection might be
important. The Promise card should be able to deliver 3 [or 4] streams
(8MB/sec. each) of big file data simultaneously, but I don't know about
that card specifically.


I have an onboard gigabit port, but I also have another Gb NIC I can
test, I will be sure to try out both.

As for the promise card, a review on TomsHardware.com had the following
graph:
http://www.tomshardware.com/storage/...a-raid-14.html
showing the card acheiving transfer rates in excess of 100MB/s though
the testbed had faster HD's and what I believe to be a 66MHz pci bus.

It also had a lot of benchmarks showing results around the 3MB/s range
though. They had something to due with a queue depth, but I didn't
really understand. e.g.
http://www.tomshardware.com/storage/...a-raid-13.html

I'm guessing the first graph is closer to what I am trying to acheive?

You've really helped, thank you ever so much for all your time,





Thanks again,
--
Meurig
(http://xboxmods.meurig.com)
 



