A computer components & hardware forum. HardwareBanter

Intel or PowerPC for RAID controller



 
 
  #11  
Old October 28th 03, 08:34 PM
Zak
external usenet poster
 
Posts: n/a
Default

albatros66 wrote:

* Do you think that an internal bus with a frequency of e.g. 33MHz (or
66MHz) is OK? The newer chips use a 133MHz bus (or more). Note that with
an enclosure containing 16 SATA drives it becomes feasible to get transfer
rates up to 300MBytes/s or more (I know about boxes with RAID5 rates of
200MBytes/s write and 300MBytes/s read)
* newer chipsets have the appropriate asic on board which works much
faster
* newer chipsets support more cache memory and also offer cache memory
battery backup ...


Well, at least one vendor offers battery backup (which you want) but
cannot deliver it.

Then, the device I tried gave satisfactory (for my purposes)
performance of about 70 megabytes/sec over SCSI, in RAID 5. Nothing to
write home about. The drives could handle this at a 33 MB/sec interface
rate, but I think the manufacturer claims 133 per drive. Useless, I'd guess.

It is the Axus Brownie.


Thomas

  #13  
Old October 30th 03, 05:07 AM
Scott
external usenet poster
 
Posts: n/a
Default

On Tue, 28 Oct 2003 14:23:18 -0800, Malcolm Weir
wrote:

So what? I'd certainly *hope* that the embedded CPU isn't touching
the data at all...

Actually most of the systems use the CPU for parity calcs
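For what it's worth, the parity calc in question is just a running XOR across the data stripes; a minimal sketch in Python (purely illustrative, not any vendor's actual firmware):

```python
from functools import reduce

def raid5_parity(stripes):
    """XOR equal-length data stripes together to get the parity stripe."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))

def reconstruct(surviving, parity):
    """Rebuild a lost stripe: XOR the parity with the surviving stripes."""
    return raid5_parity(surviving + [parity])

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = raid5_parity(data)                       # b"\xee\x22"
assert reconstruct([data[0], data[2]], parity) == data[1]
```

This is exactly the byte-crunching that either burns embedded-CPU cycles or gets handed off to a dedicated XOR engine.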

* newer chipsets have the appropriate asic on board which works much
faster


What "appropriate asic"? Works much faster than *what*?


The LSI controller (also used by StorageTek's Bladestore ATA offering)
uses a dedicated ASIC for this, freeing up the CPU to only do traffic
control and config.

Scott

  #14  
Old October 30th 03, 08:10 AM
Mark Smith
external usenet poster
 
Posts: n/a
Default


"albatros66" wrote in message
m...

* Costs are not much different: varying from 12000 USD to 18000 USD for a
4TB subsystem ...

* familiarity of developers: I'm not a developer. I'm an "end user" (not a
user at the gray end 8-}). I don't need to care about

* features - most of the subsystems have almost the same features -
the lists of features are long or very long: ten or more RAID modes,
flashing lamps, SNMP, hot everything etc etc ...

Let's take your pick again: which SCSI/FIBRE-to-IDE/SATA array is best
from any point of view??? Is anybody brave enough to answer the
question??? I have to answer to my boss because we are just buying a
few such devices (some 20TB) ...


Can you give any info on what applications you are going to be using the
storage for? It's usually a big factor in choosing the right system.
I'm guessing you are looking at something to maybe do archiving, if you're
looking at IDE-based storage?

What do you mean by "familiarity of developers"?

Good to see you're after "flashing lights" as a buying factor ;-) Not
enough manufacturers are aware of this ... though Ciprico had a great
"Meg-O-Meter" on theirs :-)

One company I have a lot of respect for is DigiData
(http://www.digidata.com). They're not too huge, so they have the time and
inclination to talk to you and help sort out any questions/problems you may
have, and the RAID controllers they've made in the past have always been
horribly quick.

Regards

Mark


  #15  
Old October 30th 03, 10:02 PM
Malcolm Weir
external usenet poster
 
Posts: n/a
Default

On Wed, 29 Oct 2003 20:07:47 -0800, Scott wrote:

On Tue, 28 Oct 2003 14:23:18 -0800, Malcolm Weir
wrote:

So what? I'd certainly *hope* that the embedded CPU isn't touching
the data at all...

Actually most of the systems use the CPU for parity calcs


*delicate shudder*

We had a soft-programmable DMA-engine FPGA doing this in 1994, maybe
'95. It had a simple "language" which we encoded in the least
significant bits of an address to process. The commands were:

LOAD
XOR in data
SAVE
HALT/INTERRUPT

One just pointed the thing at a list of memory addresses and let it
get on with it.
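That scheme can be sketched as a tiny interpreter (hypothetical Python standing in for the FPGA logic; the real engine decoded the command from an address's low bits):

```python
# Commands, as might be encoded in the low bits of each work-list address
LOAD, XOR, SAVE, HALT = 0, 1, 2, 3

def run_xor_engine(worklist, memory):
    """Walk a list of (command, address) pairs, XOR-accumulating buffers.

    `memory` maps addresses to equal-length byte buffers; `acc` plays
    the role of the engine's internal data register.
    """
    acc = None
    for cmd, addr in worklist:
        if cmd == LOAD:                 # load the first stripe into the register
            acc = bytearray(memory[addr])
        elif cmd == XOR:                # XOR another stripe into it
            for i, b in enumerate(memory[addr]):
                acc[i] ^= b
        elif cmd == SAVE:               # write the parity result back out
            memory[addr] = bytes(acc)
        elif cmd == HALT:               # raise the completion interrupt
            break
    return memory

mem = {0x00: b"\x01", 0x04: b"\x10", 0x08: b"\xff", 0x0c: b"\x00"}
run_xor_engine([(LOAD, 0x00), (XOR, 0x04), (XOR, 0x08),
                (SAVE, 0x0c), (HALT, 0x00)], mem)
assert mem[0x0c] == b"\xee"             # 0x01 ^ 0x10 ^ 0xff
```

As described: point the engine at the list of addresses and let it get on with it, with the CPU only building the list.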

* newer chipsets have the appropriate asic on board which works much
faster


What "appropriate asic"? Works much faster than *what*?


The LSI controller (also used by StorageTek's Bladestore ATA offering)
uses a dedicated ASIC for this, freeing up the CPU to only do traffic
control and config.


As I'd expect! Although an ASIC is a refinement (over the FPGA).

A problem with (oddly enough) more advanced processors doing XOR work
is that you have to flush the processor's data cache and pipeline in
order to permit the IO processor (the SCSI controller) to access the
updated data...

(The standard multiprocessor cache coherency problem)

Scott


Malc.
  #16  
Old November 3rd 03, 12:26 AM
Benjamin Goldsteen
external usenet poster
 
Posts: n/a
Default

(albatros66) wrote in message

I assume you are talking about some sort of IDE/ATA-SCSI/FC RAID.


Are you a prophet or what? Indeed I was thinking about such subsystems,
but I can say that also well-known vendors of SCSI-to-SCSI or FC-to-FC
arrays rely on these chipsets ...


I suppose that's true, but I don't see these kinds of discussions around
high-quality SCSI/SCSI and FC/FC products. In those markets, I would
just recommend getting a Xyratex or LSI unit and trusting the engineers
to deliver a quality product.

In summary, the key issues are quality of hardware and software. If
you can find two boxes that don't lose your data, then you might
worry about performance.

I agree with all above, but ...

* Do you think that an internal bus with a frequency of e.g. 33MHz (or
66MHz) is OK? The newer chips use a 133MHz bus (or more). Note that with
an enclosure containing 16 SATA drives it becomes feasible to get transfer
rates up to 300MBytes/s or more (I know about boxes with RAID5 rates of
200MBytes/s write and 300MBytes/s read)


If the units are achieving 300MB/sec then I suspect the bus speed (I
assume you are referring to the drive-channel buses) is not your
bottleneck. Most systems using IDE/ATA drives go with 1 drive/channel
(2 is the IDE/ATA limit anyway). 20MB/sec (20MHz at 8-bit, 10MHz at
16-bit) would almost be sufficient for that. I've heard that SATA is
limited to 1 drive/channel anyway, but the drive-channel data rate on
SATA is so much higher than what these systems need that it isn't an
advantage. SATA might be an advantage from a reliability/availability
perspective, but I don't see it as a performance advantage in this
situation. Unless there is some sort of read-write-verify or
read-read-compare thing going on between the drive and controller,
this is not your bottleneck.
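The arithmetic behind that claim is easy to sanity-check (the numbers are the ones quoted above; Python used just as a calculator):

```python
def bus_mb_per_sec(clock_mhz, width_bits):
    """Peak bus bandwidth in MB/s: clock (MHz) times width in bytes."""
    return clock_mhz * width_bits / 8

# With one drive per channel, a 16-drive box doing 300 MB/s aggregate
# only needs ~19 MB/s on each channel.
per_channel_need = 300 / 16                 # 18.75 MB/s

assert bus_mb_per_sec(20, 8) == 20.0        # 20 MHz x 8-bit  = 20 MB/s
assert bus_mb_per_sec(10, 16) == 20.0       # 10 MHz x 16-bit = 20 MB/s
assert bus_mb_per_sec(20, 8) > per_channel_need
```

So even a slow, narrow per-drive channel comfortably covers the aggregate rate; the bottleneck has to be elsewhere.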

On the other hand, WD's Raptor drives are only available with SATA
interfaces (no PATA). So the availability of designs and the quality of
drives is driven by the interface. Perhaps arbitrary, but that's the way
it is.

Again, I wouldn't focus so much on trying to figure out how the
details of the design impact the performance. If you try the unit out
and it performs as you expect, then you can assume the engineers made
the right price/performance tradeoffs in the design. I think you will
find it is easier to measure the performance of the box than to
predict the performance of the box from knowledge of the
components that the engineers used.

On the other hand, I think you will find reliability and robustness of
the units to be more important issues in your choice of RAID,
especially in this market segment. Unfortunately, these are more
difficult to measure. You really have to test how the unit performs
under stressful/error conditions over time. I suspect few people have
any way of measuring vibration in the drive bays, but lots of vibration
can lead to early drive failure (I've heard of performance problems in
extreme cases) and increases the probability of data loss/downtime.
Unless the RAID's firmware is open source, I don't think you will be
able to evaluate it by inspection.

* newer chipsets have the appropriate asic on board which works much
faster
* newer chipsets support more cache memory and also offer cache memory
battery backup ...


Question: will more cache memory help your performance? Is the ASIC
the bottleneck? If one unit achieves 70MB/sec (or 300MB/sec if you
prefer) using an old chipset and another achieves the same using a new
chipset, do you really care? If one unit achieves 200MB/sec using an
old chipset and another unit is only achieving 175MB/sec using
seemingly better components, which do you prefer?

By the way, dual-controller RAID units are often bottlenecked by
cache-coherency traffic between the two controllers. Sometimes more
is less.
 





