HardwareBanter

HardwareBanter (http://www.hardwarebanter.com/index.php)
-   Storage (alternative) (http://www.hardwarebanter.com/forumdisplay.php?f=31)
-   -   Stick with onboard SATA controller instead of dedicated one, along with separate 3ware? (http://www.hardwarebanter.com/showthread.php?t=161380)

markm75 December 3rd 07 09:02 PM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
I'm brainstorming a new server. I've decided to go with a 12-drive
case for this one, which means the 8-port PCIe 3ware card now needs
to be a 12-port one.

But this motherboard
http://www.atacom.com/program/print_...&USER_ID=www
already has 6 SATA ports with RAID 5 and RAID 10 capability...

Would it be a bad idea to use the onboard one for the extra 4 drives?

This server will house 3 SQL instances, file serving, and a central
antivirus server, but the user base is 40 and 2 of those SQL instances
have low hit rates.

Would I see that much more performance for the extra $250 it would
cost to put them all on the same card, or might it in fact be
better just to split them up?

Thanks for any input here.

Arno Wagner December 3rd 07 09:43 PM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
Previously markm75 wrote:
I'm brainstorming a new server.. i've decided to go with a 12 drive
case for this one.. which means the 8 port pci-e 3ware card now needs
to be a 12 port..


But.. this motherboard
http://www.atacom.com/program/print_...&USER_ID=www
already has 6 sata and raid5 and raid10 ability...


Would it be a bad idea to use the onboard one for the extra 4 drives?


This server will house 3 sql instances, file serving, antivirus
central server, but the user base is 40 and 2 of those sql instances
are low hit rates.


Would i see that much more performance for the extra $250 it would
cost to put them all on the same card.. or in fact.. might it be
better just to split them up in fact.


Depends on your access patterns. On-board "RAID" is typically
fakeRAID: it is no faster than software RAID, but less flexible,
and if the board dies you may have a recovery problem.

If you are satisfied with the 3ware controller's performance but
need 12 RAIDed disks, I would advise you to get a 12 port
controller. Otherwise put a true software RAID on the remaining
4 disks.

Arno

lars December 6th 07 08:13 PM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
markm75 wrote:

I'm brainstorming a new server.. i've decided to go with a 12 drive
case for this one.. which means the 8 port pci-e 3ware card now needs
to be a 12 port..

But.. this motherboard

http://www.atacom.com/program/print_...&USER_ID=www
already has 6 sata and raid5 and raid10 ability...

Would it be a bad idea to use the onboard one for the extra 4 drives?

This server will house 3 sql instances, file serving, antivirus
central server, but the user base is 40 and 2 of those sql instances
are low hit rates.

Would i see that much more performance for the extra $250 it would
cost to put them all on the same card.. or in fact.. might it be
better just to split them up in fact.

Thanks for any input here.



"This is for random operations with small block sizes. At large block sizes,
the Mtron
drive is 10-40% faster than a 15K SSD, mostly depending on what part of the
HDD
you are accessing."

Think you ought to correct this part... "15K SSD"??

Great stuff otherwise, I think. SSDs have to go into RAID systems; I think
so too, and it's nice to see someone agreeing on this.

Any thoughts about RAID 6? Is it needed on SSDs? On large HDDs (144GB+) yes,
but on a large SSD? I'd think rebuild time ought to be lower with an SSD.
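The rebuild-time intuition above can be checked with a back-of-envelope estimate: a rebuild has to rewrite the replacement drive's full capacity, so the time is roughly capacity divided by sustained write speed. The sizes and speeds below are illustrative assumptions, not measured figures.

```python
# Back-of-envelope RAID rebuild-time estimate: the array must rewrite
# the replacement drive's full capacity at its sustained write rate
# (assuming the rebuild is not throttled by foreground load).

def rebuild_hours(capacity_gb: float, write_mb_s: float) -> float:
    """Hours needed to rewrite capacity_gb at write_mb_s."""
    return capacity_gb * 1024 / write_mb_s / 3600

# Illustrative numbers: a 146 GB 15K HDD at ~80 MB/s sustained writes
# versus an SSD of the same size at ~200 MB/s.
hdd_h = rebuild_hours(146, 80)
ssd_h = rebuild_hours(146, 200)
print(f"HDD rebuild: ~{hdd_h:.2f} h, SSD rebuild: ~{ssd_h:.2f} h")
```

Under these assumed rates the SSD rebuild finishes in well under half the HDD time, which shortens the window in which a second failure would hurt a RAID 5 set and weakens the case for RAID 6 accordingly.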



Arno Wagner December 7th 07 01:18 AM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
Previously lars wrote:
markm75 wrote:


I'm brainstorming a new server.. i've decided to go with a 12 drive
case for this one.. which means the 8 port pci-e 3ware card now needs
to be a 12 port..

But.. this motherboard

http://www.atacom.com/program/print_...&USER_ID=www
already has 6 sata and raid5 and raid10 ability...

Would it be a bad idea to use the onboard one for the extra 4 drives?

This server will house 3 sql instances, file serving, antivirus
central server, but the user base is 40 and 2 of those sql instances
are low hit rates.

Would i see that much more performance for the extra $250 it would
cost to put them all on the same card.. or in fact.. might it be
better just to split them up in fact.

Thanks for any input here.



"This is for random operations with small block sizes. At large block sizes,
the Mtron
drive is 10-40% faster than a 15K SSD, mostly depending on what part of the
HDD
you are accessing."


Think you ought to correct this part... "15K SSD"??


Great stuff otherwise, I think. SSDs have to go into RAID systems; I think
so too, and it's nice to see someone agreeing on this.


Any thoughts about RAID 6? Is it needed on SSDs? On large HDDs (144GB+) yes,
but on a large SSD? I'd think rebuild time ought to be lower with an SSD.


SSDs are close to computer memory. Adding an external RAID layer to
them sounds like a very wrong thing to do to me. Any redundancy
should be done in the SSD itself.

Arno

lars December 7th 07 03:54 PM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
Arno Wagner wrote:

SSDs are close to computer memory. Adding an external RAID layer to
them sounds like a very wrong thing to do to me. Any redundancy
should be done in the SSD itself.

Arno


You know that in high-end IBM and other servers, RAID 1 for the memory is
available.

And come to think of it, there is still the problem of the FRU if doing the
redundancy in the SSD itself. So for real life in a server, no sir, RAID
still has its advantages - otherwise 24x7 can't be done.


Arno Wagner December 8th 07 10:58 AM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
Previously lars wrote:
Arno Wagner wrote:


SSDs are close to computer memory. Adding an external RAID layer to
them sounds like a very wrong thing to do to me. Any redundancy
should be done in the SSD itself.

Arno


You know that in high-end IBM and other servers, RAID 1 for the memory is
available.


RAID for memory? Are you sure you are not talking about ECC,
which is far better for memory? I do not expect this type of really,
really bad engineering out of IBM, unless the customer demands
it (due to incompetence). Care to list a reference?

And come to think of it, there is still the problem of the FRU if doing the
redundancy in the SSD itself. So for real life in a server, no sir, RAID
still has its advantages - otherwise 24x7 can't be done.


RAID is fine. But RAID for SSDs is just a sign that the technology
is not being understood. The problem is that because HDDs
a) typically fail as a unit and b) typically notice when they are
failing, RAID makes a lot of sense with HDDs. SSDs are more likely
to fail in individual memory locations. A quality SSD can compensate
with ECC. If it cannot, then its controller chip is shot (or something
very, very unlikely happened) and it may return arbitrary wrong data
to the user. RAID does not help at all in this case. Of course this is
simplified.

Arno
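Arno's distinction between the two failure modes can be sketched numerically. The annual failure probabilities below are made-up illustrative values; the point is only that mirroring shrinks the detectable-failure term while the silent-corruption term actually grows with a second device.

```python
# Toy model of the failure-mode argument: RAID-1 only protects against
# failures the array can detect (a drive that dies or reports an error).
# Silent wrong data is served from whichever mirror happens to be read.
# All probabilities are assumed annual rates, chosen for illustration.

p_detect = 0.05   # drive fails visibly (the typical HDD failure mode)
p_silent = 0.01   # device silently returns wrong data

# Single drive: either failure mode loses or corrupts data.
single = p_detect + p_silent

# RAID-1 pair: detectable loss needs both mirrors to fail visibly,
# but silent corruption on either mirror can still reach the user.
mirrored = p_detect ** 2 + (1 - (1 - p_silent) ** 2)

print(f"single drive: {single:.4f}  RAID-1 pair: {mirrored:.4f}")
```

With these numbers mirroring still lowers the combined risk, but all of the gain comes from the detectable mode; the silent mode, the one Arno argues dominates a failing SSD without working ECC, nearly doubles.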

lars December 8th 07 08:35 PM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
Arno Wagner wrote:

RAID for memory? Are you sure you are not talking about ECC,
which is far better for memory? I do not expect this type of really,
really bad engineering out of IBM, unless the customer demands
it (due to incompetence). Care to list a reference?


YES sure!

Go to www.ibm.com and do a search for "memory mirrored"; you will, among other
things, find xSeries models and tech papers on this.

http://researchweb.watson.ibm.com/jo.../tremaine.html
and many, many others.

And come to think of it, there is still the problem of the FRU if doing the
redundancy in the SSD itself. So for real life in a server, no sir, RAID
still has its advantages - otherwise 24x7 can't be done.


RAID is fine. But RAID for SSDs is just a sign that the technology
is not being understood. The problem here is that while HDDs
a) typically fail as a unit and b) typically notice when they are
failing, RAID makes a lot of sense with HDDs. SSDs are more likely
to fail in memory locations. A quality SSD can compensate with ECC.
If it cannot, then its controller chip is shot (or something very, very
unlikely happened) and it may give arbitrary wrong data to the
user. RAID does not help at all in this case. Of course this is
simplified.

Arno


Still won't do in real life; an SSD simply can't be sold as a "never failing
single device". There is no market in high-end IT for such a thing!
A solution with the possibility of handling the SSD as an FRU is what it
takes - without question.

Not that I like referring to specific products, but this company is, I think,
close to my own thinking on the use of SSDs.
http://www.bitmicro.com/solutions_apps_comp_raidsys.php

Now...
Please take a good cup of coffee, and sit down for a while.

Having great new technology coming to market is, well... great :-)
But if 24x7 handling can't be given, then the new technology will have no
place in high-end solutions. A solution that keeps running while a failed
part is being replaced is what defines high-end storage today - I think.

SSD will be banished to live its whole life in laptops etc.
And a "never failing single device" is simply not good enough for high-end
storage solutions.

So even if you are right that SSD "technology is not being understood", you
have to understand the storage market better, I think.
So to put it as clearly as I can: even if you are right today about SSDs, it
will take years (decades) before you could sell such stuff to the high end
without extra security.



Folkert Rienstra December 8th 07 11:39 PM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
Arno Wagner wrote in
Previously lars wrote:
Arno Wagner wrote:


SSDs are close to computer memory. Adding an external RAID layer to
them sounds like a very wrong thing to do to me. Any redundancy
should be done in the SSD itself.

Arno


You know that in high-end IBM and other servers, RAID 1 for the memory is
available.


RAID for memory?


Hard of hearing now too, babblehead?

Are you sure you are not talking about ECC,
which is far better for memory? I do not expect this type of really,
really bad engineering out of IBM, unless the customer demands
it (due to incompetence). Care to list a reference?

And come to think of it, there is still the problem of the FRU if doing the
redundancy in the SSD itself. So for real life in a server, no sir, RAID
still has its advantages - otherwise 24x7 can't be done.


RAID is fine. But RAID for SSDs is just a sign that the technology
is not being understood. The problem here is that while HDDs
a) typically fail as a unit and b) typically notice when they are
failing, RAID makes a lot of sense with HDDs. SSDs are more likely
to fail in memory locations. A quality SSD can compensate with ECC.
If it cannot, then its controller chip is shot (or something very, very
unlikely happened) and it may give arbitrary wrong data to the
user. RAID does not help at all in this case.


Of course this is simplified.


Like you yourself are 'simple', babblehead.


Arno


Arno Wagner December 10th 07 03:39 AM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
Previously lars wrote:
Arno Wagner wrote:


RAID for memory? Are you sure you are not talking about ECC,
which is far better for memory? I do not expect this type of really,
really bad engineering out of IBM, unless the customer demands
it (due to incompetence). Care to list a reference?


YES sure!


Go to www.ibm.com do a search for "memory mirrored" you will among other
things find xSeries models and tech papers on this.


http://researchweb.watson.ibm.com/jo.../tremaine.html
and many many other.


Ok, that is for very-high-availability stuff only. And it is
not RAID in the strict sense. It is mirrored memory that can
continue to operate through an ECC failure. Unless the controller
and bus system are orders of magnitude more reliable than the
memory used, this makes no sense at all technologically.


And come to think of it, there is still the problem of the FRU if doing the
redundancy in the SSD itself. So for real life in a server, no sir, RAID
still has its advantages - otherwise 24x7 can't be done.


RAID is fine. But RAID for SSDs is just a sign that the technology
is not being understood. The problem here is that while HDDs
a) typically fail as a unit and b) typically notice when they are
failing, RAID makes a lot of sense with HDDs. SSDs are more likely
to fail in memory locations. A quality SSD can compensate with ECC.
If it cannot, then its controller chip is shot (or something very, very
unlikely happened) and it may give arbitrary wrong data to the
user. RAID does not help at all in this case. Of course this is
simplified.

Arno


Still won't do in real life, SSD simply can't be sold as a "never failing
single device". There is no market in highend IT for such a thing!
A solution with the possibility of handling the SSD as a FRU is what it
takes - without question.


I never claimed that. But I expect an SSD to be as reliable as the
disk controller in the first place, and redundancy by multiple
SSDs makes no sense.

Not that I like referring to specific products, but this company is, I think,
close to my own thinking on the use of SSDs.
http://www.bitmicro.com/solutions_apps_comp_raidsys.php


Now...
Please take a good cup of coffee, and sit down for a while.


Having great new technology coming to market, is well... great :-)
But if 24x7 handling can't be given, then the new technology will have no
place in high end solutions. Solution keeps running while having failed
part replaced is what defines highend storage today - I think.


See above.

SSD will be banished to live its whole life in laptops etc.
And a "never failing single device" is simply not good enough for high-end
storage solutions.


So even if you are right SSD "technology is not being understood", you have
to understand the storage market better I think.


Not at all. I am not in the business of ripping customers off. I am a
customer that is an expert at the same time. As with the memory,
redundancy by having multiple SSDs only makes sense if the RAID
controller used is orders of magnitude more reliable than the SSD. I
doubt that is the case for typical SSDs and controllers. For HDDs it
is routinely the case.

So to put it as clearly as I can: even if you are right today about
SSDs, it will take years (decades) before you could sell such stuff to
the high end without extra security.


a) It is "safety" or "redundancy", not "security".
b) People buy all kinds of stupid stuff, because they do not understand
science and/or engineering.
c) A pair of high-reliability SSDs,
together with a high-reliability RAID-1 controller, is actually
less reliable than using one of the SSDs directly, if we assume
that a simple non-RAID controller is a lot more reliable. Do the
math. The assumption has merit, because the RAID-1 controller is
much more complex than a simple controller.

Arno
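Point c) can be made concrete with a toy calculation. The failure rates below are assumptions picked only to satisfy the stated premise: the RAID-1 controller is more complex, hence less reliable, than a simple controller, and the SSDs themselves are very reliable.

```python
# "Do the math" for point c): compare one SSD behind a simple controller
# against a mirrored pair behind a more complex RAID-1 controller.
# All figures are assumed annual failure probabilities, for illustration.

p_ssd    = 0.002   # high-reliability SSD
p_simple = 0.002   # simple non-RAID controller, comparably reliable
p_raid   = 0.010   # RAID-1 controller: more complex, so less reliable

# Single path fails if the SSD or its simple controller fails.
single = 1 - (1 - p_ssd) * (1 - p_simple)

# Mirrored path fails if the RAID controller fails or both SSDs fail.
mirrored = 1 - (1 - p_raid) * (1 - p_ssd ** 2)

print(f"single SSD path: {single:.5f}  mirrored path: {mirrored:.5f}")
```

Under these assumptions the mirrored path fails roughly two and a half times as often as the single path, because the complex controller dominates the sum; that is exactly the point of c).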


lars December 10th 07 07:58 AM

Stick with onboard SATA controller instead of dedicated one, along with separate 3ware?
 
Arno Wagner wrote:

Ok, that is for very-high-availability stuff only. And it is
not RAID in the strict sense. It is mirrored memory that can
continue to operate through an ECC failure. Unless the controller
and bus system are orders of magnitude more reliable than the
memory used, this makes no sense at all technologically.

Mirrored is RAID 1!

I know clustered OSes have come around to stay. But still, being able to run
until the next service opportunity to replace memory - great stuff if you
really need it.


Arno Wagner wrote:

I never claimed that. But I expect an SSD to be as reliable as the
disk controller in the first place, and redundancy by multiple
SSDs makes no sense.


No, not a disk controller. High end. E.g. HDS 99xx systems: doing 7D+1P uses
8 disk controllers, so the disk controller can also be replaced as an FRU
while the disk system keeps running. In each array, a given disk controller
handles only one disk of that array; each disk controller is then shared
across many disk arrays.
Think IBM high-end disk systems, the 8300 (and lower), as well.

And yes, disk controllers do fail.
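The 7D+1P layout described above can be sketched schematically. This is an illustration of the idea of spreading array members across controllers, not the actual HDS or IBM firmware layout.

```python
# Schematic of a 7D+1P layout over 8 disk controllers: each RAID group
# has 8 members (7 data + 1 parity) placed so that every member sits on
# a different controller. A single controller failure then costs each
# array at most one member, so the arrays keep running and the
# controller can be swapped as an FRU.

N_CTRL = 8

def controller_for(array_id: int, member: int) -> int:
    """Rotate members across controllers so arrays stay spread out."""
    return (array_id + member) % N_CTRL

# Every array uses all 8 controllers exactly once.
for array_id in range(16):
    assert len({controller_for(array_id, m) for m in range(8)}) == N_CTRL

# A single controller failure degrades each array by exactly one member.
failed = 3
losses = [sum(controller_for(a, m) == failed for m in range(8))
          for a in range(16)]
print(losses)  # one lost member per array
```

The rotation is the design point: if every array put member 0 on controller 0, member 1 on controller 1, and so on without offsetting by array, a controller failure would still cost only one member per array, but hot spots would concentrate; the offset spreads load as well as risk.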




Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
HardwareBanter.com