What am I doing wrong ??? Or is Adaptec 21610SA just a crappy RAID card ?



 
 
#1 - November 28th 04, 01:15 AM - news.tele.dk

Hi,

We've bought the following server:

2 x Xeon 3.2 GHz, 1 MB L2 cache
Intel Server Board SE7320SP2LX
4 GB of DDR400 REG/ECC
12 x DiamondMax +9 SATA, 120 GB, 8 MB cache, fluid bearing (in hot-swap casings)

And put in an Adaptec 21610SA + battery option.

The server should do massive SQL-database transactions.

Just to test the setup, we created one big stripe with all the disks...

But no matter what, we cannot get read/write performance to exceed 100 MB/s.
Considering that we can do around 55 MB/s on just one of the disks
(configured as a single volume),
we consider this SUB-optimal...

Are we doing something wrong, or is it just a very crappy card ???
We are sure it is mounted in a PCI-X (66 MHz / 64-bit) slot, so that's not
it.
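
For what it's worth, here is the back-of-the-envelope arithmetic behind that
complaint as a small Python sketch. It only uses the figures quoted above;
the variable names are mine, and nothing is measured beyond the 55 MB/s and
100 MB/s numbers.

# Naive scaling check: if one drive sustains ~55 MB/s, a 12-drive stripe
# should scale far past the ~100 MB/s we actually see.
SINGLE_DISK_MBS = 55    # one drive, configured as a single volume
DISKS = 12              # drives in the test stripe
OBSERVED_MBS = 100      # best read/write rate we get out of the full stripe

expected = SINGLE_DISK_MBS * DISKS
print(f"naive expectation: {expected} MB/s, observed: {OBSERVED_MBS} MB/s")
print(f"scaling achieved : {OBSERVED_MBS / SINGLE_DISK_MBS:.1f}x a single drive")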

I've also tested the system with bonnie:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
dhcp157.rgm-int. 8G 38581  98 83689  46 34724  17 30641  65 73920  23 571.3  1
                   ------Sequential Create------ --------Random Create--------
                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16  2885  95 +++++ +++ +++++ +++  2905  99 +++++ +++  8293  98
dhcp157.rgm-int.dk,8G,38581,98,83689,46,34724,17,30641,65,73920,23,571.3,1,16,2885,95,+++++,+++,+++++,+++,2905,99,+++++,+++,8293,98


AAAAArrhhhggggggggg money out the window....


#2 - November 28th 04, 12:09 PM - Charles Morrall


"news.tele.dk" skrev i meddelandet
. ..
Hi,

We've bought the following server:

2 x Xeon 3.2 GHz, 1 MB L2 cache
Intel Server Board SE7320SP2LX
4 GB of DDR400 REG/ECC
12 x DiamondMax +9 SATA, 120 GB, 8 MB cache, fluid bearing (in hot-swap casings)

And put in an Adaptec 21610SA + battery option.

The server should do massive SQL-database transactions.

I wouldn't recommend using SATA drives for online transactions, assuming
that's what you intend.
Using SCSI drives is what I would recommend for I/O intensive applications.

Just to test the setup, we created one big stripe with all the disks...

But no matter what, we cannot get read/write performance to exceed 100 MB/s.
Considering that we can do around 55 MB/s on just one of the disks
(configured as a single volume),
we consider this SUB-optimal...

What is your target bandwidth? Also, are you sure high bandwidth is what you
need? To me it sounds like you're going to need IOPS (I/O per second). I'm
not sure what specific SQL engine you'll be using, but generally SQL uses an
I/O size of 2-8 kB for online transactions. Data warehouse is another
matter.
I don't have any figures on hand for what kind of I/O rate a single SATA drive
can do while keeping a reasonable response time (20 ms being the maximum
value I've learned), but considering the drive's specs are 7,200 rpm and
average seek 9.3 ms (taken from the data sheet of a Maxtor DiamondMax Plus
9), I don't expect this drive to be able to handle more than maybe 100-120
IOPS in a random r/w I/O pattern. Let's for argument's sake say it can
deliver up to 200 IOPS, and each I/O is 8 kB. The total bandwidth would then
be 200 * 8 kB * 12 (12 is the number of drives in your setup) = 19.2 MB/s. This
is of course not factoring in RAID overhead.
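
In code form, the same arithmetic looks like this (a minimal sketch; the 200
IOPS and 8 kB figures are the deliberately generous assumptions above, not
measurements, and the names are only illustrative):

# Random-I/O throughput is bounded by IOPS per drive times I/O size,
# not by the SATA link rate.
IOPS_PER_DRIVE = 200   # generous assumption for a 7,200 rpm SATA drive
IO_SIZE_KB = 8         # typical OLTP-style I/O size (2-8 kB)
DRIVES = 12            # drives in the array

mb_per_sec = IOPS_PER_DRIVE * IO_SIZE_KB * DRIVES / 1000  # kB/s -> MB/s
print(f"aggregate random I/O: ~{mb_per_sec:.1f} MB/s (before RAID overhead)")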


Are we doing something wrong, or is it just a very crappy card ???
We are sure it is mounted in a PCI-X (66 MHz / 64-bit) slot, so that's not
it.

No, most likely not.


I've also tested the system with bonnie :

[snip]
Sorry, I can't interpret the output from bonnie.


AAAAArrhhhggggggggg money out the window....

Possibly, but it might depend on what you consider "massive SQL-database
transactions". Then again, in my experience many of the online SQL systems
I've delivered the disk subsystem for hardly use any disk resources during
normal operation. Most transactions are handled in RAM and never see the
disks. RAM being so cheap today, it makes good sense to cram as much memory
into the host as you can and not worry about the disk subsystem. Perhaps
this is what you'll see too.
Good luck!
/charles


#3 - November 28th 04, 02:56 PM - Arno Wagner

In comp.sys.ibm.pc.hardware.storage news.tele.dk wrote:
Hi,


We've bought the following server:

2 x Xeon 3.2 GHz, 1 MB L2 cache
Intel Server Board SE7320SP2LX
4 GB of DDR400 REG/ECC
12 x DiamondMax +9 SATA, 120 GB, 8 MB cache, fluid bearing (in hot-swap casings)

And put in an Adaptec 21610SA + battery option.

The server should do massive SQL-database transactions.

Just to test the setup, we created one big stripe with all the disks...

But no matter what, we cannot get read/write performance to exceed 100 MB/s.
Considering that we can do around 55 MB/s on just one of the disks
(configured as a single volume),
we consider this SUB-optimal...

Are we doing something wrong, or is it just a very crappy card ???
We are sure it is mounted in a PCI-X (66 MHz / 64-bit) slot, so that's not
it.


I have recently had bad experiences with an Adaptec SATA RAID
card for 8 disks. Besides being unreliable and having unusable
software, it was also quite slow (66 MHz / 64-bit PCI). I now have
the 8 disks on two Promise 150TX4 cards with software RAID-5 (Linux 2.6.9)
and that is faster!

I would say the card is overpriced trash. 3ware has a good name,
maybe try their cards. You can also try a pair of the Promise
SX8 cards and software RAID.

AAAAArrhhhggggggggg money out the window....


Yes, I felt that way too. Adaptec is not getting any money from
me for the next decade or so. Their SATA products are a rip-off
IMO.

Arno
--
For email address: lastname AT tik DOT ee DOT ethz DOT ch
GnuPG: ID:1E25338F FP:0C30 5782 9D93 F785 E79C 0296 797F 6B50 1E25 338F
"The more corrupt the state, the more numerous the laws" - Tacitus


#4 - November 28th 04, 05:49 PM - news.tele.dk



I wouldn't recommend using SATA drives for online transactions, assuming
that's what you intend.
Using SCSI drives is what I would recommend for I/O intensive
applications.


SATA was chosen based on cost/benefit.
We figured we could buy approx. 4 times the amount of SATA disks
compared to SCSI.

Some of Tom's Hardware's recent tests were very promising about SATA
compared to SCSI...


What is your target bandwidth? Also, are you sure high bandwidth is what
you need? To me it sounds like you're going to need IOPS (I/O per second).
I'm not sure what specific SQL engine you'll be using, but generally SQL
uses an I/O size of 2-8 kB for online transactions. Data warehouse is
another matter.


We don't have a target bandwidth; the system was bought to host a rather new
product, and the load right now is rather low.
The server can actually handle the current load easily, but once it is put
into production (at an external hosting partner),
you know how hard it is to upgrade, so we would like it to last as long as
possible, i.e. maximize the current configuration.

So we could live with the current configuration... but if we could get the
same I/O speed for less money, why buy the
top-of-the-line Adaptec SATA card??? Money out the window, I say.

Furthermore, we expect to buy two machines and have the server's (Linux)
partitions mirrored
via http://www.drbd.org/ (over a crossover gigabit Ethernet cable), so one
bad I/O card is actually two :-)

I don't have any figures on hand for what kind of I/O rate a single SATA drive
can do while keeping a reasonable response time (20 ms being the maximum
value I've learned), but considering the drive's specs are 7,200 rpm and
average seek 9.3 ms (taken from the data sheet of a Maxtor DiamondMax Plus
9), I don't expect this drive to be able to handle more than maybe 100-120
IOPS in a random r/w I/O pattern. Let's for argument's sake say it can
deliver up to 200 IOPS, and each I/O is 8 kB. The total bandwidth would
then be 200 * 8 kB * 12 (12 is the number of drives in your setup) = 19.2 MB/s.
This is of course not factoring in RAID overhead.


That's why we bought the server with a fair amount of RAM: we should be able
to keep the
currently active database objects in RAM, and get fair speed when seeking in
the "archives" (which is actually also the point you make later).

The battery option should take care of I/O writes.


AAAAArrhhhggggggggg money out the window....

Possibly, but it might depend on what you consider "massive SQL-database
transactions". Then again, in my experience many of the online SQL systems
I've delivered the disk subsystem for hardly use any disk resources during
normal operation. Most transactions are handled in RAM and never see the
disks. RAM being so cheap today, it makes good sense to cram as much
memory into the host as you can and not worry about the disk subsystem.
Perhaps this is what you'll see too.


I've examined benchmarks of other cards posted on the net; unfortunately
no one
has tested the 21610SA against other cards (I wonder why?).

We should be able to get at least 400-600 MB/s of bandwidth to the disk
system.
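
As a rough, purely illustrative sanity check on that target (Python; the
55 MB/s per-drive figure is from our own single-disk test, and 533 MB/s is
the usual theoretical ceiling quoted for a 64-bit / 66 MHz PCI-X slot):

import math

TARGETS_MBS = (400, 600)
SINGLE_DISK_MBS = 55      # measured on one drive as a single volume
PCI_X_64_66_MBS = 533     # theoretical slot ceiling, shared by the whole card

for target in TARGETS_MBS:
    drives_needed = math.ceil(target / SINGLE_DISK_MBS)
    verdict = "fits within" if target <= PCI_X_64_66_MBS else "exceeds"
    print(f"{target} MB/s needs ~{drives_needed} drives streaming flat out "
          f"and {verdict} the PCI-X slot ceiling")

So the lower end of the target is realistic for sequential I/O across 12
drives, while the upper end brushes the limit of the slot itself.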

I can see that our server supplier also sells 3ware; we will try to buy the
12-port SATA (9500) card next week. I wonder why they didn't mention all of
this to us when we bought the system.

If I get the time, I'll maybe post the tests.

Best regards,
Carsten


#5 - November 28th 04, 06:16 PM - Rita Ä Berkowitz

news.tele.dk wrote:

SATA was chosen based on cost/benefit.
We figured we could buy approx. 4 times the amount of SATA disks
compared to SCSI.


Yep, you got great cost savings without any benefits by going SATA. You
spent a good chunk of change on all your other hardware to degrade it back
to a gamer's machine or an eMachine for the sake of saving a few bucks.
That machine will only see its full potential when you put U320 SCSI in it.

Some of Tom's Hardware's recent tests were very promising about SATA
compared to SCSI...


Again, another victim of the hype, propaganda, and other bull**** one finds
at Tom's Hardware.

Scrap the SATA garbage and get what you want in the first place, U320 SCSI.
Doing otherwise is looking for long-term problems and heartache. Good luck.


Rita
--
http://www.geocities.com/ritaberk2003/





#6 - November 28th 04, 11:05 PM - news.tele.dk


I think you are blaming it on the wrong component here...

I'm quite sure it's not the disks, it's the controller; the server does
quite well on one disk, but try to spread the load over more than two
disks and the controller(?) breaks...

So the deal is, I'm not complaining about SATA (yet), I'm complaining about
an obviously crappy controller card from Adaptec.

The story also contains a "side story": my brother bought an external
SATA-based RAID unit, which is doing very well compared to other SCSI
units (especially if you add "cost" to the metric).

The interface to the unit is SCSI, but internally it uses SATA disks. So to
my understanding I should be able to get the same thing, just "internally",
with the right equipment (controller card).

My belief is that the disk market will eventually go SATA: SATA technology
will get better and better, fewer SCSI disks will be sold, so they will get
more and more expensive, and finally die.

best regards,
Carsten



"Rita Ä Berkowitz" skrev i en meddelelse
...
news.tele.dk wrote:

SATA was chosen based on cost/benefit.
We figured we could buy approx. 4 times the amount of SATA disks
compared to SCSI.


Yep, you got great cost savings without any benefits by going SATA. You
spent a good chunk of change on all your other hardware to degrade it back
to a gamer's machine or an eMachine for the sake of saving a few bucks.
That machine will only see its full potential when you put U320 SCSI in
it.

Some of Tom's Hardware's recent tests were very promising about SATA
compared to SCSI...


Again, another victim of the hype, propaganda, and other bull**** one
finds
at Tom's Hardware.

Scrap the SATA garbage and get what you want in the first place, U320
SCSI.
Doing otherwise is looking for long-term problems and heartache. Good
luck.


Rita
--
http://www.geocities.com/ritaberk2003/







#7 - November 29th 04, 12:11 AM - Jesper Monsted

"news.tele.dk" wrote in
k:

My belief is that the disk market will eventually go SATA: SATA
technology will get better and better, fewer SCSI disks will be sold,
so they will get more and more expensive, and finally die.


Nope. The market that uses SCSI now will use SAS drives in the future, and
FC will live for quite some time yet, although my guess is it'll get
replaced on the disk side of large arrays by SAS in the long run. PATA
will die and SATA will take the low- and lower-midrange market.

--
/Jesper Monsted
#8 - November 29th 04, 12:41 AM - Rita Ä Berkowitz

news.tele.dk wrote:

I think you are blaming it on the wrong component here...


Nope, SATA in general is to blame.

I'm quite sure it's not the disks, it's the controller; the server
does quite well on one disk, but try to spread the load over more than
two disks and the controller(?) breaks...


You're starting to see the joys of SATA. Unfortunately, it's costing you
time and money. Do a Google on SATA and this group and you'll quickly see
there are a lot of people in the same boat.

So the deal is, I'm not complaining about SATA (yet), I'm complaining
about an obviously crappy controller card from Adaptec.


You will be. SATA is all about novelty and hype. You get impressive specs
of storage and speed that cost pennies waved under your nose, and you get
hooked. When you start putting it together and fall into the trap of poor
reliability is when you wish you had never got involved with it. Believe me,
there are a lot of people who bought into the hype and have since gone back
to SCSI. It was a costly lesson.

The story also contains a "side story": my brother bought an external
SATA-based RAID unit, which is doing very well compared to other SCSI
units (especially if you add "cost" to the metric).


Using SATA for gaming machines and other novelty-type boxes is the best
thing since sliced bread, but when you're running a business that depends
on the reliability and uptime of its servers, SATA costs you more money
in maintenance.

The interface to the unit is SCSI, but internally it uses SATA disks.
So to my understanding I should be able to get the same thing, just
"internally", with the right equipment (controller card).


This is the whole illusion with SATA: it wants to be like SCSI, but it can't.
This isn't something that can go both ways.

My belief is that the disk market will eventually go SATA: SATA
technology will get better and better, fewer SCSI disks will be sold,
so they will get more and more expensive, and finally die.


Nope, SCSI will never truly die. You might get variants of it (SAS), but it
will be with us for a long time in business class and enterprise machines.

Good luck, I hope you get it working.


Rita
--
http://www.geocities.com/ritaberk2003/





#9 - November 29th 04, 01:08 AM - J. Clarke

news.tele.dk wrote:



I wouldn't recommend using SATA drives for online transactions, assuming
that's what you intend.
Using SCSI drives is what I would recommend for I/O intensive
applications.


SATA was chosen based on cost/benefit.
We figured we could buy approx. 4 times the amount of SATA disks
compared to SCSI.

Some of Tom's Hardware's recent tests were very promising about SATA
compared to SCSI...


All else being equal, SATA single drives seem to come pretty close to the
performance level of SCSI drives. But a high-end SATA drive is an
entry-level SCSI drive. Maybe that will change eventually. Right now SATA
has a way to go before it becomes a viable substitute even for PATA, let
alone SCSI.

What is your target bandwidth? Also, are you sure high bandwidth is what
you need? To me it sounds like you're going to need IOPS (I/O per
second). I'm not sure what specific SQL engine you'll be using, but
generally SQL uses an I/O size of 2-8 kB for online transactions. Data
warehouse is another matter.


We don't have a target bandwidth; the system was bought to host a rather
new product, and the load right now is rather low.
The server can actually handle the current load easily, but once it is put
into production (at an external hosting partner),
you know how hard it is to upgrade, so we would like it to last as long as
possible, i.e. maximize the current configuration.

So we could live with the current configuration... but if we could get the
same I/O speed for less money, why buy the
top-of-the-line Adaptec SATA card??? Money out the window, I say.


Whoever told you that Adaptec was "top of the line" is an idiot. Adaptec
RAID controllers have never worked particularly well and their ATA RAID
controllers even less so. See what IBM uses in their servers--you'll find
that it's Mylex, which IBM spun off to LSI Logic a while back. LSI Logic
has a nice family of SATA RAID controllers that might be worth a look. You
could also look at 3Ware, which specializes in SATA RAID. Since you're
using an Intel server board, an Intel RAID controller (designs are similar
but not identical to LSI IIRC) might be another viable option.

What you're going to have to do, though, is try the various boards in your
server until you either find one that hits your performance objectives or
you have gone through all of them.

Furthermore, we expect to buy two machines and have the server's (Linux)
partitions mirrored
via http://www.drbd.org/ (over a crossover gigabit Ethernet cable), so one
bad I/O card is actually two :-)

I don't have any figures on hand for what kind of I/O rate a single SATA
drive can do while keeping a reasonable response time (20 ms being the
maximum value I've learned), but considering the drive's specs are 7,200
rpm and average seek 9.3 ms (taken from the data sheet of a Maxtor
DiamondMax Plus 9), I don't expect this drive to be able to handle more
than maybe 100-120 IOPS in a random r/w I/O pattern. Let's for argument's
sake say it can deliver up to 200 IOPS, and each I/O is 8 kB. The total
bandwidth would then be 200 * 8 kB * 12 (12 is the number of drives in your
setup) = 19.2 MB/s. This is of course not factoring in RAID overhead.


That's why we bought the server with a fair amount of RAM: we should be
able to keep the
currently active database objects in RAM, and get fair speed when seeking in
the "archives" (which is actually also the point you make later).

The battery option should take care of I/O writes.


Huh? The only thing the battery option does is hold the data in the cache
in the event of a power outage until the power is restored. It has nothing
whatsoever to do with performance.

AAAAArrhhhggggggggg money out the window....

Possibly, but it might depend on what you consider "massive SQL-database
transactions". Then again, in my experience many of the online SQL systems
I've delivered the disk subsystem for hardly use any disk resources during
normal operation. Most transactions are handled in RAM and never see the
disks. RAM being so cheap today, it makes good sense to cram as much
memory into the host as you can and not worry about the disk subsystem.
Perhaps this is what you'll see too.


I've examined benchmarks of other cards posted on the net; unfortunately
no one
has tested the 21610SA against other cards (I wonder why?).

We should be able to get at least 400-600 MB/s of bandwidth to the disk
system.

I can see that our server supplier also sells 3ware; we will try to buy
the 12-port SATA (9500) card next week. I wonder why they didn't mention
all of this to us when we bought the system.

If I get the time, I'll maybe post the tests.

Best regards,
Carsten


--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
#10 - November 29th 04, 02:10 AM - Nik Simpson

J. Clarke wrote:
news.tele.dk wrote:

That's why we bought the server with a fair amount of RAM: we should
be able to keep the
currently active database objects in RAM, and get fair speed when
seeking in the "archives" (which is actually also the point you make
later).

The battery option should take care of I/O writes.


Huh? The only thing the battery option does is hold the data in the
cache in the event of a power outage until the power is restored. It
has nothing whatsoever to do with performance.



I'm guessing he's got write-through cache enabled on the RAID controller and
believes that putting a battery backup on the cache will allow him to
enable write-back caching. Of course his battery backup doesn't help if he
gets a RAM error/failure in the cache ;-)
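
A toy illustration of the distinction (made-up latency figures, not from any
data sheet): write-through makes every write wait for the platters, while
write-back lets the host continue as soon as the data is in controller RAM,
which is only safe to enable when that RAM is battery-backed.

DISK_WRITE_MS = 8.0      # rough random-write service time on a 7,200 rpm drive
CACHE_WRITE_MS = 0.05    # acknowledging the write from controller cache RAM

print(f"write-through: every write waits ~{DISK_WRITE_MS} ms for the disks")
print(f"write-back: the host sees ~{CACHE_WRITE_MS} ms and the controller")
print("destages to disk later (safe across power loss only with the battery)")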


--
Nik Simpson


 



