November 28th 04, 04:49 PM
news.tele.dk



> I wouldn't recommend using SATA drives for online transactions, assuming
> that's what you intend.
> Using SCSI drives is what I would recommend for I/O intensive
> applications.


SATA was chosen based on cost/benefit.
We figured we could buy roughly four times as many SATA disks as SCSI disks
for the same money.

Some of Tom's Hardware's recent tests were very promising about SATA
compared to SCSI...


> What is your target bandwidth? Also, are you sure high bandwidth is what
> you need? To me it sounds like you're going to need IOPS (I/O per second).
> I'm not sure what specific SQL engine you'll be using, but generally SQL
> uses an I/O size of 2-8 kB for online transactions. Data warehouse is
> another matter.


We don't have a target bandwidth. The system was bought to host a rather new
product, and the load right now is rather low.
The server can actually handle the current load easily, but once it is in
production (at an external hosting partner),
you know how hard it is to upgrade, so we would like it to last as long as
possible, i.e. get the most out of the current configuration.

So we could live with the current configuration... but if we could get the
same I/O speed for less money, why buy the
top-of-the-line Adaptec SATA card??? Money out the window, I say.

Furthermore, we expect to buy two machines and have the server's partitions
(Linux) mirrored
via http://www.drbd.org/ (over a crossed GLAN cable), so a bad I/O card is
actually two bad cards :-)
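For the curious, the mirroring part would be a DRBD resource definition
roughly along these lines (just a sketch; the hostnames, partitions and
addresses are placeholders, not our real setup):

    resource r0 {
      protocol C;                  # synchronous: a write completes only when
                                   # both nodes have it on stable storage
      syncer { rate 100M; }        # cap resync traffic on the crossed GLAN link

      on db-node-1 {
        device    /dev/drbd0;      # block device the filesystem is created on
        disk      /dev/sda5;       # underlying partition being mirrored
        address   10.0.0.1:7788;   # address on the dedicated crossover link
        meta-disk internal;
      }
      on db-node-2 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

As far as I understand, protocol C is the safe choice for a database, since a
write isn't acknowledged until it sits on both boxes.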

> I don't have any figures on hand for what kind of I/O rate a single SATA
> drive can do while keeping a reasonable response time (20 ms being the
> maximum value I've learned), but considering the drive's specs are 7.2 krpm
> and average seek 9.3 ms (taken from the data sheet of a Maxtor DiamondMax
> Plus 9), I don't expect this drive to be able to handle more than maybe
> 100-120 IOPS in a random r/w I/O pattern. Let's for argument's sake say it
> can deliver up to 200 IOPS, and each I/O is 8 kB. The total bandwidth would
> then be 200*8*12 (12 is the number of drives in your setup) = 19.2 MB/s.
> This is of course not factoring in RAID overhead.
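Running those numbers myself as a quick sanity check (a minimal sketch: half a
rotation of rotational latency at 7,200 rpm plus the quoted 9.3 ms average
seek, and no command queueing, so it lands a bit below your 100-120 IOPS
figure):

    # Rough per-disk IOPS estimate for a 7,200 rpm SATA drive doing random 8 kB I/O.
    avg_seek_ms     = 9.3                      # from the DiamondMax Plus 9 data sheet
    avg_rotation_ms = 0.5 * 60000.0 / 7200     # half a revolution, ~4.2 ms
    service_time_ms = avg_seek_ms + avg_rotation_ms

    iops_per_disk = 1000.0 / service_time_ms   # ~74 IOPS per drive without queueing
    drives        = 12
    io_size_kb    = 8

    bandwidth_mb = iops_per_disk * drives * io_size_kb / 1000.0
    print("%.0f IOPS per drive, %.1f MB/s across %d drives" % (iops_per_disk, bandwidth_mb, drives))

Even at your optimistic 200 IOPS per drive that's the ~19 MB/s you arrive at,
so random I/O clearly stays nowhere near the card's sequential numbers.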


That's why we bought the server with a fair amount of RAM: we should be able
to keep the
currently active database objects in RAM, and still get fair speed when
seeking in the "archives" (which is actually also the point you make later).

The battery option should take care of I/O writes.


AAAAArrhhhggggggggg money out the window....

> Possibly, but it might depend on what you consider "massive SQL-database
> transactions". Then again, in my experience many of the online SQL systems
> I've delivered disk subsystems for hardly use any disk resources during
> normal operation. Most transactions are handled in RAM and never see the
> disks. RAM being so cheap today, it makes good sense to cram as much memory
> into the host as possible and not worry about the disk subsystem. Perhaps
> this is what you'll see too.


I've examined benchmarks of other cards posted on the net; unfortunately no
one
has tested the 21610SA against other cards (I wonder why?).

We should be able to get at least 400-600 MB/s of bandwidth to the disk
system.

I can see that our server supplier also sells 3Ware; we will try to buy the
12-port
SATA card (the 9500) next week. I wonder why they didn't mention all this
to us when we bought the system.

If I get the time, I'll maybe post the tests.
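Even something as crude as the sketch below would give a rough sequential-read
number to compare the cards with (the device path and sizes are placeholders,
and the OS page cache can inflate the result if you re-read recently written
data, so take it with a grain of salt):

    import time

    # Crude sequential-read throughput test; DEVICE and TOTAL are placeholders.
    DEVICE = "/dev/sda"            # or a big file on the RAID volume; raw devices need root
    CHUNK  = 1024 * 1024           # 1 MiB per read
    TOTAL  = 2048 * CHUNK          # read 2 GiB in total

    with open(DEVICE, "rb", buffering=0) as f:
        start = time.time()
        done = 0
        while done < TOTAL:
            buf = f.read(CHUNK)
            if not buf:            # hit the end of the device/file
                break
            done += len(buf)
        elapsed = time.time() - start

    print("%.0f MiB in %.1f s = %.1f MB/s" % (done / (1024.0 * 1024.0), elapsed, done / elapsed / 1e6))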

Best regards,
Carsten