#1
Stick with onboard SATA controller instead of a dedicated one, along with a separate 3ware?
I'm brainstorming a new server. I've decided to go with a 12-drive case this time, which means the 8-port PCIe 3ware card would need to become a 12-port one. But this motherboard http://www.atacom.com/program/print_...&USER_I D=www already has 6 SATA ports with RAID 5 and RAID 10 capability. Would it be a bad idea to use the onboard controller for the extra 4 drives? This server will house 3 SQL instances, file serving, and a central antivirus server, but the user base is only 40 and 2 of those SQL instances have low hit rates. Would I see that much more performance for the extra $250 it would cost to put them all on the same card, or might it in fact be better to split them up? Thanks for any input here.
#2
Previously markm75 wrote:
"Would it be a bad idea to use the onboard one for the extra 4 drives? ... Would I see that much more performance for the extra $250 it would cost to put them all on the same card, or might it be better just to split them up?"

Depends on your access patterns. Onboard "RAID" is typically fakeRAID: it is no faster than software RAID, but less flexible, and if the board dies you may have a recovery problem. If you are satisfied with the 3ware controller's performance but need 12 RAIDed disks, I would advise you to get a 12-port controller. Otherwise put a true software RAID on the remaining 4 disks.

Arno
#3
markm75 wrote:
"Would it be a bad idea to use the onboard one for the extra 4 drives? ... Thanks for any input here."

"This is for random operations with small block sizes. At large block sizes, the Mtron drive is 10-40% faster than a 15K SSD, mostly depending on what part of the HDD you are accessing."

I think you ought to correct this part... "15K SSD"?? Great stuff otherwise, I think. SSDs have to go into RAID systems; I think so too, and it's nice to see someone agreeing on this. Any thoughts about RAID 6? Is it needed for SSDs? For large HDDs (144GB+), yes, but for a large SSD? The rebuild time ought to be lower with an SSD, I'd think.
#4
Previously lars wrote:
"Any thoughts about RAID 6? Is it needed for SSDs? ... The rebuild time ought to be lower with an SSD, I'd think."

SSDs are close to computer memory. Adding an external RAID layer to them sounds like a very wrong thing to do to me. Any redundancy should be done in the SSD itself.

Arno
#5
Arno Wagner wrote:
"Any redundancy should be done in the SSD itself."

You know that in high-end IBM and other servers, RAID 1 for the memory is available. And come to think of it, there is still the problem of FRU (field-replaceable unit) handling if the redundancy is done inside the SSD itself. So for real life in a server: no sir, RAID still has its advantages - otherwise 24x7 operation can't be done.
#6
Previously lars wrote:
"You know that in high-end IBM and other servers, RAID 1 for the memory is available."

RAID for memory? Are you sure you are not talking about ECC, which is far better for memory? I do not expect this type of really, really bad engineering from IBM, unless the customer demands it (due to incompetence). Care to list a reference?

"And come to think of it, there is still the problem of FRU handling if the redundancy is done inside the SSD itself. So for real life in a server: no sir, RAID still has its advantages - otherwise 24x7 operation can't be done."

RAID is fine. But RAID for SSDs is just a sign that the technology is not being understood. The point is this: HDDs a) typically fail as a unit and b) typically notice when they are failing, so RAID makes a lot of sense with HDDs. SSDs are more likely to fail in individual memory locations. A quality SSD can compensate with ECC. If it cannot, then its controller chip is shot (or something very, very unlikely happened) and it may hand arbitrary wrong data to the user. RAID does not help at all in that case. Of course this is simplified.

Arno
#7
Arno Wagner wrote:
"RAID for memory? Are you sure you are not talking about ECC, which is far better for memory? ... Care to list a reference?"

YES, sure! Go to www.ibm.com and do a search for "memory mirrored"; you will, among other things, find xSeries models and tech papers on this. http://researchweb.watson.ibm.com/jo.../tremaine.html and many, many others.

"RAID is fine. But RAID for SSDs is just a sign that the technology is not being understood. ... Of course this is simplified."

Still won't do in real life: an SSD simply can't be sold as a "never failing single device". There is no market in high-end IT for such a thing! A solution that can handle the SSD as an FRU is what it takes - without question. Not that I like referring to specific products, but this company, I think, is close to my own thinking on the use of SSDs: http://www.bitmicro.com/solutions_apps_comp_raidsys.php

Now... please take a good cup of coffee and sit down for a while. Having great new technology come to market is, well... great :-) But if 24x7 serviceability can't be provided, the new technology will have no place in high-end solutions. "The solution keeps running while the failed part is replaced" is what defines high-end storage today, I think. Otherwise SSDs will be banished to live out their whole lives in laptops and the like; a "never failing single device" is not good enough for high-end storage solutions. So even if you are right that SSD "technology is not being understood", you have to understand the storage market better, I think. To put it as clearly as I can: even if you are right today about SSDs, it will take years (decades) before you could sell such a thing to the high end without extra redundancy.
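The "memory mirroring" lars cites duplicates every write to two independent banks. A toy model makes the contrast with ECC concrete: a bare mirror can detect that the two copies diverged, but by itself cannot tell which copy is correct, whereas an ECC code can repair the bit. The class below is entirely hypothetical, a sketch of the concept and nothing like IBM's actual implementation.

```python
class MirroredStore:
    """Toy model of memory mirroring: every write goes to two
    independent copies; a read compares them.  A bare mirror can only
    DETECT a mismatch -- unlike ECC, it cannot tell which copy is good."""

    def __init__(self, size):
        self.primary = bytearray(size)
        self.mirror = bytearray(size)

    def write(self, addr, value):
        self.primary[addr] = value
        self.mirror[addr] = value

    def read(self, addr):
        if self.primary[addr] != self.mirror[addr]:
            raise RuntimeError(f"mirror mismatch at address {addr}")
        return self.primary[addr]

m = MirroredStore(16)
m.write(3, 0xAB)
print(hex(m.read(3)))    # 0xab
m.mirror[3] ^= 0x01      # simulate a bit flip in one copy
# m.read(3) would now raise RuntimeError: detected, but not correctable
```

Real mirrored-memory systems pair the mirror with per-bank ECC, so a failing bank is both detected and survivable - which is lars's FRU point in miniature.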
#8
Arno Wagner wrote in
"RAID for memory?"

Hard of hearing now too, babblehead?

"Of course this is simplified."

Like you yourself are 'simple', babblehead.