A computer components & hardware forum. HardwareBanter


1.485 Gbit/s to and from HDD subsystem



 
 
  #51  
Old December 14th 06, 10:57 AM posted to comp.sys.ibm.pc.hardware.chips,comp.sys.ibm.pc.hardware.storage
Spoon
external usenet poster
 
Posts: 31
Default 1.485 Gbit/s to and from HDD subsystem

teckytim wrote:

Spoon. This question is in the wrong group. Talk to *actual* storage
professionals that have *actually* used and built storage systems that
meet or exceed these requirements at comp.arch.storage.


Tim,

Thanks for the pointer. I will give it a try.

Regards.
  #52  
Old December 14th 06, 11:05 AM posted to comp.sys.ibm.pc.hardware.chips,comp.sys.ibm.pc.hardware.storage
Spoon
external usenet poster
 
Posts: 31
Default 1.485 Gbit/s to and from HDD subsystem

willbill wrote:

Spoon wrote:

willbill wrote:

Spoon wrote:

What are you trying to say?
That SATA HDDs cannot reach 150 MB/s while SCSI drives can?
For example, the Barracuda 7200.10 can burst data at 250 MB/s.
And if you were talking about sustained rates, could you point
me to a SCSI HDD that can sustain 150 MB/s.

you might take a look at this review:
http://www.storagereview.com/article...00655LW_1.html


What am I supposed to see? :-)

Even the Cheetah 15K.5 cannot sustain 150 MB/s.

(135 MB/s on outer tracks down to 82 MB/s on inner tracks.)


i figured you'd have the brains to look around the site

their jan.'06 review of the 150GB Raptor
shows 88.3 MB/s outer, down to 60.2 inner

see: http://www.storagereview.com/article...500ADFD_3.html


I don't understand what point you are trying to make.

Can you elaborate?
  #53  
Old December 14th 06, 11:51 PM posted to comp.sys.ibm.pc.hardware.chips
Douglas Bollinger
external usenet poster
 
Posts: 6
Default 1.485 Gbit/s to and from HDD subsystem

On Tue, 12 Dec 2006 15:46:13 +0100, Spoon wrote:

snip
They reach 410 MB/s read and (only) 200 MB/s write.

Holy mother of pearls! Only 200 MB/s with 12 drives?


Here's a small web forum thread with some Linux SW RAID benchmarks:

http://forums.2cpu.com/showthread.php?t=79364

He was hitting 484 MB/s reads & 335 MB/s writes.

--
DOS Air:
All the passengers go out onto the runway, grab hold of the plane, push it
until it gets in the air, hop on, jump off when it hits the ground again.
Then they grab the plane again, push it back into the air, hop on, et
cetera.
  #54  
Old December 15th 06, 03:07 AM posted to comp.sys.ibm.pc.hardware.chips,comp.sys.ibm.pc.hardware.storage,comp.arch.storage
willbill
external usenet poster
 
Posts: 103
Default 1.485 Gbit/s to and from HDD subsystem

Spoon wrote:

willbill wrote:

Spoon wrote:

willbill wrote:

Spoon wrote:

What are you trying to say?
That SATA HDDs cannot reach 150 MB/s while SCSI drives can?
For example, the Barracuda 7200.10 can burst data at 250 MB/s.
And if you were talking about sustained rates, could you point
me to a SCSI HDD that can sustain 150 MB/s.


you might take a look at this review:
http://www.storagereview.com/article...00655LW_1.html


What am I supposed to see? :-)

Even the Cheetah 15K.5 cannot sustain 150 MB/s.

(135 MB/s on outer tracks down to 82 MB/s on inner tracks.)



i figured you'd have the brains to look around the site

their jan.'06 review of the 150GB Raptor
shows 88.3 MB/s outer, down to 60.2 inner

see: http://www.storagereview.com/article...500ADFD_3.html



I don't understand what point you are trying to make.

Can you elaborate?



you were the person who was attracted
to Raptor/SATA (presumably cost issues),
*not* me

i wasn't and i said so upfront

is there some disconnect here?

SCSI has had the superior performance
(over IDE and SATA) for almost forever
(10+ years)

i've been upfront about my limited
knowledge on the subject of high end raid

imho, the primary use of usenet n/g's
is ideas

and i have done my limited/honest best
to give ideas in this thread

the fact that you responded as you did to
both myself and techytim is in your favor

the one thing that techytim said that
was worthwhile was doing a post on the
comp.arch.storage n/g

while i don't doubt his input that no raid7
controllers are available, i'd still
google on raid7 raid-7 raid_7 and "raid 7"
and see what turns up, coz it never ceases
to amaze me on how wrong "experts" are

i can say that i've never seen either
a raid2 or a raid7 controller, which
is why i made the comment:
"good luck finding a raid7 controller"

so far raid0 may work for you, although
it seems to still be an open question
whether its write performance will meet
your needs

my one other thought is that if you do find
a raid0 controller with the write performance
that you need, you might give some serious
thought to laying in a couple of extra 300GB
Seagate Cheetah drives and then only
allocate the 1st 60/70% to the partition
that you are going to use (allocate the rest
to a 2nd partition and test the speed diff)

to my mind, any high end raid controller should take
the outer rims (fastest) for the initial selection,
but i don't know that for sure (another question
to pose on the comp.arch.storage n/g)

bill
  #55  
Old December 15th 06, 08:35 AM posted to comp.sys.ibm.pc.hardware.chips,comp.sys.ibm.pc.hardware.storage
Bill Todd
external usenet poster
 
Posts: 162
Default 1.485 Gbit/s to and from HDD subsystem

Spoon wrote:
teckytim wrote:

Spoon. This question is in the wrong group. Talk to *actual* storage
professionals that have *actually* used and built storage systems that
meet or exceed these requirements at comp.arch.storage.


Tim,

Thanks for the pointer. I will give it a try.

Regards.


Well...

1. If the software that you're using is competent, an average desktop
system today can stream data onto or off at least a half-dozen disks at
very close to their theoretical potential, even on their outermost
tracks. Five of today's better 7200 rpm desktop drives will handle your
bandwidth requirement even on their innermost tracks; if you restrict
your usage to middle or outer tracks, four or even three could suffice
(unless you need the extra space anyway).
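For a sense of the arithmetic behind those drive counts: 1.485 Gbit/s works
out to about 185.6 MB/s, and dividing that by a per-drive sustained rate
gives the minimum stripe width. A quick sketch (the per-drive rates below
are rough assumptions for 2006-era 7200 rpm desktop drives, not
measurements):

```python
import math

# 1.485 Gbit/s expressed in decimal megabytes per second
required_mb_s = 1.485e9 / 8 / 1e6   # ~185.6 MB/s

# Assumed sustained rates for a 2006-era 7200 rpm desktop drive
# (rough figures, not measurements): outer vs. inner tracks.
rates = {"outer tracks": 65.0, "inner tracks": 40.0}

for zone, mb_s in rates.items():
    drives = math.ceil(required_mb_s / mb_s)
    print(f"{zone}: {mb_s:.0f} MB/s per drive -> minimum {drives} drives")
```

With those assumed rates, five drives cover the requirement even on inner
tracks and three suffice if usage is restricted to outer tracks, which
matches the counts above.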

2. If you're not going to be streaming data for hours on end (i.e., the
disks get to take occasional significant breaks), conventional SATA (or
even plain old PATA) drives will work just fine (though 'near-line
enterprise' versions cost very little more if you'd feel more
comfortable with them). Don't bother with Raptors and don't even think
about high-end FC/SCSI - they're for seek-intensive continuous
operation, and the increase in per-disk streaming bandwidth doesn't
begin to justify the increase in cost.

3. One conceivable gotcha could be recalibration activity: I'm not
sure how completely that's been tamed. It used to be that a disk would
just decide to take a break for a second or more once in a while to
recalibrate (reevaluate its internal sense of where the tracks were),
which tended to disrupt the smoothness of streaming data. Back then
vendors sold special 'audio-visual' disks that purported to avoid this
problem, but I haven't heard anything about them recently. I suspect
that all disks are now more civilized about waiting for an opportune
moment (or that most of the need for recalibration may have disappeared
when the servo information became embedded with the data itself) - but
letting the array's temperature stabilize a bit after start-up before
putting it to use couldn't hurt.

4. If you really can tolerate interruption by the occasional disk
failure, RAID-0 is the way to go. If not, use RAID-10 (which will
maintain your read bandwidth even if you lose a drive, unlike RAID-5).

5. If you use RAID-0, software RAID will work virtually as well as
hardware RAID would (this is almost as true for RAID-10): just make
sure that the disks' write caches are enabled. In the unlikely event
that you wind up using PATA drives, each single cable/controller port may
not have sufficient bandwidth to support more than one drive - in which case
you'll need more than the typical two PATA motherboard connectors,
either via a MB with an additional on-board RAID controller or by using
an add-on card. Unless you'll be doing significant *other* activity
while streaming data it would probably be safe for one of your streaming
disks to share a cable with your system disk (though if that turned out
to be a problem you could run the system and other software off a CD or
USB drive).
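Part of why software RAID-0 is so cheap on the CPU is that striping is just
address arithmetic. A minimal sketch of the chunk-to-disk mapping a RAID-0
layer performs (the 4-disk / 64 KiB-chunk numbers are arbitrary
illustrative assumptions):

```python
def raid0_map(lba: int, num_disks: int, chunk_sectors: int) -> tuple[int, int]:
    """Map a logical sector number to (disk index, sector on that disk)
    for a simple round-robin RAID-0 stripe layout."""
    chunk = lba // chunk_sectors          # which stripe chunk the sector is in
    disk = chunk % num_disks              # chunks rotate round-robin over disks
    chunk_on_disk = chunk // num_disks    # chunks preceding it on that disk
    offset = chunk_on_disk * chunk_sectors + lba % chunk_sectors
    return disk, offset

# 4 disks, 64 KiB chunks (128 sectors of 512 bytes)
print(raid0_map(0, 4, 128))     # (0, 0)
print(raid0_map(128, 4, 128))   # (1, 0)
print(raid0_map(512, 4, 128))   # (0, 128)
```

One integer divide and modulo per request is why the host CPU barely
notices; the disks and DMA engines do the real work.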

6. If you're not actually *processing* the data stream but just writing
it to disk as it comes in and then later reading it back out to
somewhere else, even a modest single-core processor won't break a sweat:
it just acts as a traffic cop, while the motherboard's DMA engines and
the disks do all the real work. Memory bandwidth won't be taxed,
either. (Note that both of these observations might change if you used
software RAID-5.)

7. PCI bandwidth, however, may be a problem. A plain old 32-bit/33 MHz
PCI bus maxes out at about 133 MB/sec of theoretical bandwidth (minus
arbitration overhead) -
so even if the system bridges kept the disk transfers off the PCI (which
would not be the case if you needed to use a PCI card to connect some of
the disks) you couldn't stream the data in, or out, over the PCI (though
with bridges that supported dual Gigabit Ethernet as well as the disk
traffic you could do the job without touching the PCI at all - if
connecting via Ethernet were an option). 64/66 PCI might have enough
headroom to handle the combined interface and disk traffic and PCI-X
certainly should - so you shouldn't need to go to PCI Express unless you
want to.
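The bus arithmetic in point 7 can be checked directly (theoretical peaks;
real buses deliver somewhat less):

```python
stream_mb_s = 1.485e9 / 8 / 1e6          # ~185.6 MB/s per direction

pci_32_33 = 33.33e6 * 4 / 1e6            # 32-bit @ 33 MHz -> ~133 MB/s
pci_64_66 = 66.67e6 * 8 / 1e6            # 64-bit @ 66 MHz -> ~533 MB/s

print(f"plain PCI: {pci_32_33:.0f} MB/s (stream needs {stream_mb_s:.0f})")
print(f"64/66 PCI: {pci_64_66:.0f} MB/s")

assert pci_32_33 < stream_mb_s           # one stream already overflows it
assert pci_64_66 > 2 * stream_mb_s       # headroom for interface + disk traffic
```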

8. Desktop disks don't take all that much power to run. A typical
contemporary 350W power supply will spin up 4 of them simultaneously
(which is by far the time of heaviest power draw - that's why spin-up
times are staggered in larger systems), unless it's heavily loaded by
some macho gaming processor and graphics card. Once they're spinning,
they take very little power indeed (especially if they're only streaming
data rather than beating their little hearts out doing constant long
seeks and short transfers): cooling won't be a significant problem
(though you do want to keep them comfortable).

- bill
  #56  
Old December 15th 06, 09:21 AM posted to comp.sys.ibm.pc.hardware.chips
Spoon
external usenet poster
 
Posts: 31
Default 1.485 Gbit/s to and from HDD subsystem

Douglas Bollinger wrote:

Spoon wrote:

They reach 410 MB/s read and (only) 200 MB/s write.

Holy mother of pearls! Only 200 MB/s with 12 drives?


Here's a small web forum thread with some Linux SW RAID benchmarks:

http://forums.2cpu.com/showthread.php?t=79364

He was hitting 484 MB/s reads & 335 MB/s writes.


Very interesting indeed. Thanks!

I'll definitely give Linux software RAID a spin.
  #57  
Old December 20th 06, 06:32 AM posted to comp.sys.ibm.pc.hardware.chips,comp.sys.ibm.pc.hardware.storage
Pepwin
external usenet poster
 
Posts: 6
Default 1.485 Gbit/s to and from HDD subsystem

Wow,
well here is something for the processor that may be of value or not:

AMD 580X CrossFire™ Chipset - Specifications

General

* The world's first single-chip 2x16 PCI-E chipset
* Enhanced support for over-clocking and PCI Express performance
* Fastest multi-GPU interconnect
* Coupled with SB600 for performance

CPU Interface

* Support for all AMD CPUs: Athlon™ 64, Athlon™ 64 FX,
Athlon™ 64 X2 Dual-Core, and Sempron™ processors
* Support for 64-bit extended operating systems
* Highly overclockable and robust HyperTransport™ interface

PCI Express Interface

* 2 x16 PCI Express lanes to support simultaneous operation of
graphics cards
* Additional 4 PCI-E General Purpose Lanes for peripheral support
* Compliant with the PCI Express 1.0a Specifications

Power Management Features

* Fully supports ACPI states S1, S3, S4, and S5
* Support for AMD Cool'n'Quiet™ technology for crisp and
quiet operation

Optimized Software Support

* Unified driver support on all ATI Radeon PCI Express discrete
graphics products
* Support for Microsoft® Windows® XP, Windows® 2000, and Linux

Universal Connectivity

* A-Link Xpress II interface to ATI northbridges, providing high
bandwidth for high speed peripherals
* 10 USB 2.0 ports
* SATA Gen 2 PHY support at 3.0 Gb/s with E-SATA capability
* 4-port SATA AHCI controller supports NCQ and slumber modes
* ATA 133 controller supports up to UDMA mode 6 with 2 drives (disk
or optical)
* TPM 1.1 and 1.2 compliant
* ASF 2.0 support for manageability control
* HPET (high precision event timer), ACPI 3.0, and AHCI support for
Windows Vista
* Power management engine supporting both AMD and Intel platforms
and forward compliant to MS Windows Vista
* UAA (universal audio architecture) support for High-Definition
Audio and MODEM
* PCI v2.3 (up to 6 slots)
* LPC (Low Pin Count), SPI (new flash bus), and SM (System
Management) bus management and arbitrations
* "Legacy" PC compatible functions, RTC (Real Time Clock),
interrupt controller and DMA controllers


teckytim wrote:
Spoon wrote:

What are you trying to say?
That SATA HDDs cannot reach 150 MB/s while SCSI drives can?


you might take a look at this review:
http://www.storagereview.com/article...00655LW_1.html


What am I supposed to see? :-)

Even the Cheetah 15K.5 cannot sustain 150 MB/s.

(135 MB/s on outer tracks down to 82 MB/s on inner tracks.)



i figured you'd have the brains to
look around the site

their jan.'06 review of the 150GB Raptor
shows 88.3 MB/s outer, down to 60.2 inner

see: http://www.storagereview.com/article...500ADFD_3.html


Hmm. Getting nasty, yet you cite a slower drive. And after all this
posting you still haven't provided a *solution* to the sustained 186
MB/s requirement.


i'd think raid_7 would be worth looking at:
http://www.storagereview.com/guide20...els/comp..html


Absolute silliness. There's nothing to look at. RAID 7 was a bunch of
unsafe mumbo jumbo *only* available from the now defunct Storage
Computer Corporation.

What's wrong with RAID-0?



well i was responding to your raid6 comment;
raid6 has the worst write performance

raid0 has no fault tolerance


Doesn't matter if he doesn't need it.


Spoon. This question is in the wrong group. Talk to *actual* storage
professionals that have *actually* used and built storage systems that
meet or exceed these requirements at comp.arch.storage.


 



