A computer components & hardware forum. HardwareBanter



Constructing a disk system with RAM read speed and RAID 1 reliability



 
 
  #1  
Old August 31st 08, 08:15 PM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
Peter Olcott
external usenet poster
 
Posts: 86
Default Constructing a disk system with RAM read speed and RAID 1 reliability

I have an idea to construct a disk array that can provide
2400 MB / second read performance and provide the
reliability of RAID 1 mirroring. All that I need to know is
how to directly hook up 24 drives to a single workstation.

I have a need for a system that can read 1.6 GB files
directly into memory in one second or less. What I need is a
sort of virtual memory system with a variable page size of
at least 500MB. I only need to be able to read these files
quickly; writing them can be at single-disk write speed.

If I can connect 24 drives up to a single workstation, I
can read increments of 1/24th of the file size into a
single memory buffer, each at an offset that is a
multiple of 1/24th of the file size. This
would only require a single seek per drive. If these drives
each provide 100 MB per second sustained read performance,
then the drives can read at about the maximum speed that RAM
can be written to.
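
To make the read pattern concrete, here is a minimal
sketch of what I have in mind (each drive is assumed to
hold its 1/24th of the file as its own segment file; the
paths, sizes, and one-thread-per-drive layout are only
illustrative):

// Minimal sketch: read 24 per-drive segment files into one buffer in parallel.
// Paths, sizes, and the one-thread-per-drive layout are illustrative only.
#include <cstddef>
#include <fstream>
#include <string>
#include <thread>
#include <vector>

int main() {
    const std::size_t kDrives       = 24;
    const std::size_t kSegmentBytes = 1600000000ull / kDrives;   // ~1.6 GB total

    std::vector<char> buffer(kDrives * kSegmentBytes);   // one destination buffer
    std::vector<std::thread> readers;

    for (std::size_t i = 0; i < kDrives; ++i) {
        readers.emplace_back([&, i] {
            // Drive i holds the i-th 1/24th of the file: one sequential read per drive.
            std::ifstream in("/mnt/drive" + std::to_string(i) + "/segment.bin",
                             std::ios::binary);
            in.read(buffer.data() + i * kSegmentBytes, kSegmentBytes);
        });
    }
    for (auto& t : readers) t.join();                     // wait for all 24 reads
    return 0;
}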

The missing piece of this plan is knowing the best hardware
combination to use, and whether or not any existing hardware
combination will meet these requirements.


  #2  
Old August 31st 08, 09:22 PM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
david
external usenet poster
 
Posts: 231
Default Constructing a disk system with RAM read speed and RAID 1 reliability

On Sun, 31 Aug 2008 14:15:39 -0500, Peter Olcott rearranged some electrons
to say:

[snip]


Have you considered solid-state disks?
  #3  
Old August 31st 08, 09:54 PM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
Peter Olcott
external usenet poster
 
Posts: 86
Default Constructing a disk system with RAM read speed and RAID 1 reliability


"david" wrote in message
...

[snip]

Have you considered solid-state disks?


I am guessing that my solution (at least for read only) will
beat their performance at a tiny fraction of their cost.
http://www.violin-memory.com/assets/techbrief_gen1.pdf



  #4  
Old August 31st 08, 11:17 PM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
Paul
external usenet poster
 
Posts: 13,364
Default Constructing a disk system with RAM read speed and RAID 1 reliability

Peter Olcott wrote:
"david" wrote in message
...
On Sun, 31 Aug 2008 14:15:39 -0500, Peter Olcott
rearranged some electrons
to say:

I have an idea to construct a disk array that can provide
2400 MB /
second read performance and provide the reliability of
RAID 1 mirroring.
All that I need to know is how to directly hook up 24
drives to a single
workstation.

I have a need for a system that can read 1.6 GB files
directly into
memory in one second or less. What I need is a sort of
virtual memory
system with a variable page size of at least 500MB. I
only need to be
able to read these files quickly, writing them can be at
single disk
write speed.

If I can connect 24 drives up to a single workstation, I
can read 1/24th
of the file sized increments into a single memory buffer
at 1/24th of
the file sized offsets. This would only require a single
seek per drive.
If these drives each provide 100 MB per second sustained
read
performance, then the drives can read at about the
maximum speed that
RAM can be written to.

The missing piece of this plan is knowing the best
hardware combination
to use, and whether or not any existing hardware
combination will meet
these requirements.

Have you considered solid-state disks?


I am guessing that my solution (at least for read only) will
beat their performance at a tiny fraction of their cost.
http://www.violin-memory.com/assets/techbrief_gen1.pdf


The best commodity device (in terms of pricing), is the Gigabyte
iRAM with SATA interface.

http://www.anandtech.com/storage/showdoc.aspx?i=2480

It was available in several versions, but as far as I know, the one
that gets power (but not digital signals) from a PCI slot is the one
most often "seen in the wild".

http://en.wikipedia.org/wiki/I-RAM

http://www.dailytech.com/article.aspx?newsid=7563

An experimenter on 2cpu.com tested the PCI powered version on an Areca
card, and found that proper RAID cards (like an Areca) didn't like the
iRAM, because it doesn't emulate enough of a SATA disk.
The iRAM does work with things like the RAID interface on a
Southbridge chip. That limits the number of iRAMs that could
be used to six or so. The iRAM interface is SATA 150MB/sec,
and the actual transfer rate is lower than that.

(A sample thread. Areca 1160 has a status of POS or "piece of ****",
when used with the iRAM.)

http://forums.2cpu.com/archive/index.php/t-77526.html
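
As a rough sanity check on those numbers (six iRAMs, each capped by its
SATA 150MB/sec interface, against the 2400 MB/sec target from your
first post):

// Back-of-envelope ceiling for six iRAMs hanging off the Southbridge SATA ports.
#include <cstdio>

int main() {
    const double kSataCapMBs = 150.0;  // SATA 1.5Gb/s interface limit per iRAM
    const int    kMaxIrams   = 6;      // roughly the number of Southbridge ports
    std::printf("best case: %.0f MB/sec\n", kSataCapMBs * kMaxIrams);  // 900 MB/sec
    return 0;
}

Even at the full interface rate that is only about 900 MB/sec, well short
of 2400 MB/sec, and the real per-iRAM rate is lower still.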

Otherwise, you'd be getting RAM-based storage for the price of
DDR RAM. The last 1GB DDR stick I bought (good stuff) cost
$35, and you can get them for less than that.

This is another example of a product based on RAM. I think
the prices are without RAM installed, but I could be wrong.

http://www.hyperossystems.co.uk/0704...rosHDIIproduct

Dan answers the question, from Apr 2008.

http://www.dansdata.com/askdan00025.htm

Paul
  #5  
Old September 3rd 08, 04:11 AM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
Peter Olcott
external usenet poster
 
Posts: 86
Default Constructing a disk system with RAM read speed and RAID 1 reliability


"Paul" wrote in message
...

[snip]


I asked Intel if they have any boards that will meet my
requirements.


  #6  
Old September 3rd 08, 11:14 AM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
Paul
external usenet poster
 
Posts: 13,364
Default Constructing a disk system with RAM read speed and RAID 1 reliability

Peter Olcott wrote:


I asked Intel if they have any boards that will meet my
requirements.



Do you mean a motherboard like the Skulltrail?

http://www.intel.com/products/deskto...S-overview.htm

http://downloadcenter.intel.com/Prod...ProductID=2864

You can also browse through here.

http://www.intel.com/products/mother...dr+prod_boards

*******
There is another possibility here, only this uses an Nvidia chipset.

MSI P7N Diamond. Four large PCI Express slots, but a block
diagram is hard to find. It claims 16,16,16,8 for lane wiring.
The first two x16 slots are driven by the Nforce200 switch,
leaving the x16 and x8 to be handled by the Southbridge, as I
don't see any other chips for the job. It is hard for me to
believe there are 24 lanes on one chip, which is why I'd
prefer to find a block diagram.

http://www.newegg.com/Product/Produc...82E16813130158

The basic premise behind the 780i is here. Even though the diagram
is labeled 780i for both chips, it is actually 780i and 570i.

http://www.anandtech.com/showdoc.aspx?i=3180&p=2

The 570 is listed as 16,8 here. So it does look like the MSI board
would give you at least x8 performance on each of four slots, but
in an ATX form factor 12"x9.6" motherboard.

http://www.nvidia.com/page/nforce5_specs_amd.html

More details on the board here. Check the CPU support chart, before
buying a processor.

http://global.msi.com.tw/index.php?f...2&maincat_no=1

Paul
  #7  
Old September 4th 08, 02:45 AM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
Peter Olcott
external usenet poster
 
Posts: 86
Default Constructing a disk system with RAM read speed and RAID 1 reliability

The key question is whether or not any of these
alternatives is feasible, and that mostly comes down to
whether any of these boards can provide enough
simultaneous bandwidth from their expansion slots. I
really need at least 400 MB per second simultaneously
from each of six slots. That should (hopefully) provide
my required 1600 MB per second, even from the slow part
of the drive.
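
The arithmetic I am working from looks like this (the
two-thirds inner-track figure is just my assumption for
how much throughput drops at the slow end of each drive):

// Back-of-envelope for the six-slot plan; the inner-track factor is an assumption.
#include <cstdio>

int main() {
    const int    kSlots       = 6;
    const double kPerSlotMBs  = 400.0;    // 4 drives x 100 MB/sec per slot (outer tracks)
    const double kInnerFactor = 2.0 / 3;  // assumed slowdown on the inner tracks
    std::printf("peak:       %.0f MB/sec\n", kSlots * kPerSlotMBs);                 // 2400
    std::printf("inner zone: %.0f MB/sec\n", kSlots * kPerSlotMBs * kInnerFactor);  // 1600
    return 0;
}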

"Paul" wrote in message
...

[snip]



  #8  
Old September 4th 08, 05:30 AM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
Paul
external usenet poster
 
Posts: 13,364
Default Constructing a disk system with RAM read speed and RAID 1 reliability

Peter Olcott wrote:
The key question is whether or not any of these alternatives is
feasible, and that mostly comes down to whether any of these boards
can provide enough simultaneous bandwidth from their expansion slots.
I really need at least 400 MB per second simultaneously from each of
six slots. That should (hopefully) provide my required 1600 MB per
second, even from the slow part of the drive.


                               P7N Diamond

                                 +------------------------------- x1
                                 |  +---------------------------- x1
                                 |  |
                             +----------+  ???  +-----------+
12.8GB/sec  ___/ DDR2-800 ---|   780i   |-------| Nforce200 |---- x16  3.6GB/sec
Two sticks     \ DDR2-800 ---|          |       |  Switch   |---- x16  3.6GB/sec
                             +----------+       +-----------+
                                   |
                                   | Hypertransport
                                   | Likely 4GB/sec, a *guess*
                                   | Enough for x16 to spread around
                                   |
                             +-----------+
                             |   570i    |----------------------- x16  2GB/sec
                             |           |----------------------- x8   2GB/sec
                             |           |------- (PCI)
                             |           |------- (SATA)
                             |           |------- four x1, for onboard usage.
                             +-----------+

First of all, the output of Nforce200 is (2) x16 PCI Express revision 2.0,
which would be a total of 16GB/sec. The input to the Nforce200 cannot
sustain that. And in any case, your storage cards are going to be
running with revision 1.0 speeds, so when using storage cards, the
max bandwidth you could pull from the two slots, would be 8GB/sec total.

It could be that the input bus to the Nforce200 is 16 lanes at 4.5GT/sec.
A "normal" PCI Express lane runs at 2.5GT/sec. So there is 4GB/sec * (4.5/2.5)
or 7.2GB/sec feeding into the Nforce200. That means you could have 3.6GB/sec
on each of the "x16" outputs. This is still plenty, with respect to your
requirement of 400MB/sec from each.

The Hypertransport leading to the 570i could be a 4GB/sec one. This
is a guess based on the fact that the chipset is advertised as a
"3x16" platform. So the x8 and x16 likely share 4GB/sec of bandwidth,
making two solid x8 slots in practice. That would be 2GB/sec per slot.
Activities from other 570i interfaces, would cut into the bandwidth
slightly, such as a burst from the SATA ports. Maybe if you set up a
SATA four drive RAID0, a burst from that would provide the most
competition with the other slots. That still leaves enough bandwidth
to have more than 400MB/sec on the PCI Express slots.

So while there is some detail missing in the diagram, I'm not overly
concerned about the available bandwidth.

The memory supports up to DDR2-1200. You might be using DDR2-800
in there. That would be 6.4GB/sec per memory DIMM. Two DIMMs
operating in dual channel gives 12.8GB/sec, which is just enough
to match the 3x16 bandwidth. And memory does not actually
sustain that kind of bandwidth forever. But again, compared to your
total 1.6GB/sec requirement, there is likely more than enough
capacity in the memory subsystem. Your cards might use 25% of the
practical bandwidth.
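
If you want to redo the arithmetic, the budget above boils down to
something like this (PCI Express v1.0 is taken as 250MB/sec per lane;
the 4.5GT/sec input and the 4GB/sec Hypertransport are the same
guesses as in the diagram):

// Rough bandwidth budget for the P7N Diamond layout sketched above.
// The 4.5 GT/s Nforce200 input and 4 GB/s Hypertransport are guesses, as noted.
#include <cstdio>

int main() {
    const double kPcieV1LaneGBs = 0.25;                    // 2.5 GT/s, 8b/10b -> 250 MB/s per lane
    const double kX16V1GBs      = 16 * kPcieV1LaneGBs;     // 4 GB/s for a v1.0 x16 link

    const double kNf200InGBs    = kX16V1GBs * (4.5 / 2.5); // 7.2 GB/s into the Nforce200
    const double kPerX16SlotGBs = kNf200InGBs / 2;         // 3.6 GB/s per Nforce200 x16 slot

    const double kHtGuessGBs    = 4.0;                     // Hypertransport to the 570i (guess)
    const double kPer570SlotGBs = kHtGuessGBs / 2;         // ~2 GB/s each for the x16 and x8

    const double kDdr2_800GBs   = 0.8 * 8;                 // 6.4 GB/s per DDR2-800 channel
    const double kDualChanGBs   = 2 * kDdr2_800GBs;        // 12.8 GB/s in dual channel

    std::printf("Nforce200 slots: %.1f GB/sec each\n", kPerX16SlotGBs);
    std::printf("570i slots:      %.1f GB/sec each\n", kPer570SlotGBs);
    std::printf("memory:          %.1f GB/sec dual channel\n", kDualChanGBs);
    std::printf("requirement:     1.6 GB/sec total (0.4 GB/sec per card slot)\n");
    return 0;
}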

If you can find a better diagram than this one, then that
would help fill in the details.

http://images.anandtech.com/reviews/...-block_lrg.png

Paul
  #9  
Old September 4th 08, 12:42 PM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
PeteOlcott
external usenet poster
 
Posts: 11
Default Constructing a disk system with RAM read speed and RAID 1 reliability

On Sep 3, 11:30 pm, Paul wrote:

[snip]


So six slots could simultaneously provide at least 400 MB per second?
If the answer is yes, then the next question would be: can hard drive
controller cards provide at least 400 MB per second, i.e. 100 MB per
second from each of four drives read simultaneously?
  #10  
Old September 4th 08, 10:42 PM posted to alt.comp.hardware,alt.comp.hardware.pc-homebuilt
Paul
external usenet poster
 
Posts: 13,364
Default Constructing a disk system with RAM read speed and RAID 1 reliability

PeteOlcott wrote:

[snip]


So six slots could simultaneously provide at least 400 MB per second?
If the answer is yes, then the next question would be:
Can hard drive controller cards provide at least 400 MB per second:
100 MB per second each from simultaneously reading four different
drives?


The board has four worthwhile slots. The two PCI Express x1 slots aren't
going to be as capable as an x8 or x16 slot (about 200MB/sec each).

The answer about controllers depends on what software is added
between the controllers and the OS. One of my assumptions
was that *perhaps* you could use the Tomshardware Windows RAID hack
to combine the bandwidth of 16 disks at 100MB/sec each. I selected
a non-RAID card in that case, so 16 separate disks are presented
to the OS, assuming that the Tomshardware RAID hack would allow
their bandwidth to be combined.

   (User sees combined bandwidth 1600MB/sec)
                     |
     Tomshardware_RAID_Hack_For_WinXP
        |        |        |        |
     -------  -------  -------  -------     Four separate cards
     | | | |  | | | |  | | | |  | | | |     Sixteen disks

If you use Areca cards, we know already from manufacturer data that
they can produce 800MB/sec, limited by their IOP. But with the
Areca cards, you'd need another layer of software to combine
the bandwidth of two cards.

   (User sees combined bandwidth 1600MB/sec)
                     |
   (Need a way to RAID0 these two arrays ???) ------ what product does this ?
           |                 |
           |                 |       (800MB/sec each)
    ---------------   ---------------        Two separate cards
    | | | | | | | |   | | | | | | | |        Sixteen disks (Velociraptor)

The Areca solution means you need fewer slots, or potentially you
could get more bandwidth etc., but it also means identifying the
software that allows the output of the two Areca arrays to be
combined. If you're writing your own software, then you can do that
part yourself (assuming non-blocking I/O in Windows, so two program
threads could read into memory buffers simultaneously).
Alternatively, the software might be something commercial that
allows RAID0 combination of separate arrays.
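
For the "write it yourself" route, here is a minimal sketch of what an
application-level RAID0 read across the two arrays could look like. It
assumes the 1.6 GB of data was pre-written as alternating 1 MB stripes,
one stripe file per array; the paths, stripe size, and layout are
illustrative assumptions, not any product's on-disk format.

// Sketch of "RAID0 in the application" across two array volumes.
// Paths, stripe size, and the striped file layout are assumptions.
#include <cstddef>
#include <fstream>
#include <string>
#include <thread>
#include <vector>

int main() {
    const std::size_t kStripe  = 1 << 20;   // 1 MB stripe unit (assumption)
    const std::size_t kStripes = 1600;      // ~1.6 GB total
    std::vector<char> buffer(kStripe * kStripes);

    auto reader = [&](std::size_t which, const std::string& path) {
        std::ifstream in(path, std::ios::binary);
        // Array 0 holds the even-numbered stripes, array 1 the odd-numbered
        // ones, so each thread reads its own stripe file front to back.
        for (std::size_t s = which; s < kStripes; s += 2)
            in.read(buffer.data() + s * kStripe, kStripe);
    };

    std::thread a(reader, std::size_t(0), std::string("E:/stripes.bin"));  // first array
    std::thread b(reader, std::size_t(1), std::string("F:/stripes.bin"));  // second array
    a.join();
    b.join();
    return 0;
}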

In the second figure, maybe if the arrays appear as volumes to the
OS, you could use the Tomshardware hack to combine them? Perhaps you
could test this concept using only four disks to start, and a couple
of onboard controllers on an ordinary motherboard, to see if the
"striped" option here would allow two arrays to be combined.

http://www.tomshardware.com/reviews/...pen,925-3.html

   (User sees combined bandwidth)
                  |
   Tomshardware_RAID_Hack_For_WinXP ("striped")
        |               |
        |               |
       ---             ---        Two onboard controllers, RAID0 each
       | |             | |        Four disks

What you'd do for the experiment is set up each array of
two disks individually. Use HDTach or HDTune to benchmark
each array. Then add in the Tomshardware hack, combining
the two arrays by using Windows to "stripe" the volumes,
and run HDTach or HDTune on the resulting virtual array.
So you should be able to do a partial proof of concept
with simple ingredients. What cannot be known in advance
is the degree to which it scales, and whether everything
works to deliver more than 1000MB/sec when the real
hardware config is set up.
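
If HDTach or HDTune won't run against the combined volume for some
reason, a crude stand-in is to time a big sequential read yourself.
This is only a sketch; the path and chunk size are placeholders, and
the test file should be much larger than RAM so the OS file cache
doesn't flatter the number.

// Crude sequential-read throughput check: read a large file in 8 MB chunks and time it.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
    const std::size_t kChunk = 8 << 20;     // 8 MB per read
    std::vector<char> chunk(kChunk);

    std::ifstream in("D:/testfile.bin", std::ios::binary);  // placeholder path
    std::size_t total = 0;

    auto start = std::chrono::steady_clock::now();
    while (in.read(chunk.data(), chunk.size()) || in.gcount() > 0) {
        total += static_cast<std::size_t>(in.gcount());
        if (!in) break;                     // short read at end of file
    }
    double seconds = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();

    std::printf("%zu bytes in %.2f s = %.1f MB/sec\n",
                total, seconds, total / (1e6 * seconds));
    return 0;
}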

So whatever you do, this is still going to be an expensive
experiment -- unless you can find an article where someone has
tested a similar concept, you won't know for sure about the
scaling, or whether the thing runs out of steam at such high
bandwidths (CPU limit). For example, any time software has to do
memory-to-memory copies of the data, that just kills performance,
so if any part of the software stack does that, it will crush the
overall throughput. The small experiment with the four disks above
may not be able to show you that limitation.

That is why using a single controller capable of doing more than
800MB/sec is more attractive. With that working for you, you're
more likely to see a benchmark before buying equipment.

Paul
 







