#41
In article ,
"Peter" wrote:

> > > reliability: SCSI MTBF 1,200,000 hours, many SATA drives only run to
> > > 600,000 MTBF
> > > (http://searchstorage.techtarget.com/.../0,294276,sid5 gci1001942 tax294586,00.html)
> >
> > What do those numbers actually mean? 1,200,000 hours is 136 years. So this
> > number taken at face value is pretty silly because it's essentially saying
> > it won't be until sometime in the 22nd century before the first SCSI hard
> > disk anywhere on Earth fails!
>
> You are completely wrong (did you ever study statistics?).

Reread what I wrote carefully, and you will see that it is quite correct.
#42
In article ,
"J. Clarke" wrote:

> > For enterprise storage, replacing drives every two years would be very
> > costly.
>
> The price of the drives is peanuts compared to the cost of downtime.

This seems to imply nobody ever buys new equipment.
#43
In article ,
Malcolm Weir wrote:

> Only if the network traffic has a 1:1 correspondence with the disk traffic.
> This is very, very rare in large environments. Plus you have forgotten that
> Ethernet is full-duplex...

I don't know whether to interpret this as saying everyone is on a very slow
network or a very fast network. Those scenarios are probably very rare, so
either way, it supports my argument. SATA is plenty fast enough on a server
to handle clients on an ordinary Ethernet network.
#44
On Thu, 02 Dec 2004 07:21:47 GMT, flux wrote:

> In article , Malcolm Weir wrote:
>
> > Only if the network traffic has a 1:1 correspondence with the disk
> > traffic. This is very, very rare in large environments. Plus you have
> > forgotten that Ethernet is full-duplex...
>
> I don't know whether to interpret this as saying everyone is on a very slow
> network or a very fast network. Those scenarios are probably very rare, so
> either way, it supports my argument. SATA is plenty fast enough on a server
> to handle clients on an ordinary Ethernet network.

*Sigh* You don't get out much, do you?

Outside the trivial case of file and print serving, people use applications
like, say, databases. Which means that a few bytes sent across the network
(say, "SELECT x.y where x.id = z and y.thing = 123") result in many, many
disk accesses, totalling gigabytes of IO.

Now, has it dawned on you that even the most rudimentary of network servers
has multiple NICs? Why do you think that is? Are server manufacturers silly?

I strongly suspect that all your experience has been with the trivial case,
where you have (at most) a few file-sharing clients on a network. In that
case, you are right. But there's no money in that market, since any fool can
build such a system. Where the *hard* problems are, at least for those of us
in comp.arch.storage, it is assumed that the network problem is already
solved. Need 10 Gb/sec of network bandwidth and don't have 10G Ethernet?
Simply trunk ten 1000BaseT links to your switch! Cisco (and the like) can
handle that part of the problem.

Secondly, "very rare" is relative. In terms of raw number of installations,
you could be right. In terms of organizations with an IT budget in excess of
$100,000 per year, you are wrong. Guess who the vendors care about?

Malc.
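[Editor's note: the point about database traffic amplifying into disk IO can be
made concrete with some arithmetic. A hedged sketch only; the query size and
table size below are made-up illustrative figures, not measurements.]

```python
# Illustrative (made-up) figures: a tiny request on the wire can trigger
# orders of magnitude more disk IO on the server.
query_bytes = 60                # roughly the size of a short SELECT statement
table_scan_bytes = 5 * 10**9    # a full scan of a hypothetical 5 GB table

# Ratio of disk bytes moved to network bytes received for this request.
amplification = table_scan_bytes / query_bytes
print(f"disk bytes per network byte: {amplification:,.0f}")
```

On these numbers, one 60-byte query produces tens of millions of disk bytes
per network byte, which is why client link speed alone says little about how
fast the server's storage must be.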
#45
On Thu, 02 Dec 2004 07:13:57 GMT, flux wrote:

> In article , "J. Clarke" wrote:
>
> > > For enterprise storage, replacing drives every two years would be very
> > > costly.
> >
> > The price of the drives is peanuts compared to the cost of downtime.
>
> This seems to imply nobody ever buys new equipment.

No, it doesn't. It implies that enterprises would rather replace drives every
three years, not every two, and would rather replace them every four years
than every three, etc.

Are you being deliberately obtuse?

Malc.
#46
On Thu, 02 Dec 2004 07:12:26 GMT, flux wrote:

> In article , "Peter" wrote:
>
> > > > reliability: SCSI MTBF 1,200,000 hours, many SATA drives only run to
> > > > 600,000 MTBF
> > > > (http://searchstorage.techtarget.com/.../0,294276,sid5 gci1001942 tax294586,00.html)
> > >
> > > What do those numbers actually mean? 1,200,000 hours is 136 years. So
> > > this number taken at face value is pretty silly because it's
> > > essentially saying it won't be until sometime in the 22nd century
> > > before the first SCSI hard disk anywhere on Earth fails!
> >
> > You are completely wrong (did you ever study statistics?).
>
> Reread what I wrote carefully, and you will see that it is quite correct.

No, it isn't. You claimed that the "mean time between failures" being 136
years is equivalent to the claim that the minimum time to failure is 136
years.

I'll reduce it to terms simple enough for anyone: take an ordinary die. I
hope you can see that the mean number of throws between "1"s is 6, since if
you toss it 60 times, you can expect to see a "1" about 10 times. You've just
claimed that this is essentially saying you won't see a "1" until you've
tossed the die six times. Which is nonsense. Try it and see.

Malc.
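[Editor's note: the die example is easy to check by simulation. A minimal
sketch (the trial count and seed are arbitrary): the mean gap between "1"s
comes out near 6 throws, yet a majority of runs see their first "1" before
the sixth throw, which is exactly the gap between "mean" and "minimum".]

```python
import random

random.seed(42)

trials = 100_000
waits = []
for _ in range(trials):
    # Count throws of a fair six-sided die until the first "1" appears.
    throws = 0
    while True:
        throws += 1
        if random.randint(1, 6) == 1:
            break
    waits.append(throws)

mean_wait = sum(waits) / trials
saw_one_early = sum(1 for w in waits if w < 6) / trials

print(f"mean throws until first 1: {mean_wait:.2f}")            # close to 6
print(f"fraction with a 1 before throw 6: {saw_one_early:.2f}")  # about 0.60
```

The second figure matches the closed form 1 - (5/6)^5 ≈ 0.598: roughly 60% of
sequences see a "1" before the "mean" of 6 throws is ever reached.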
#47
On Thu, 02 Dec 2004 07:09:02 GMT, flux wrote:

[ Snip ]

> The fact that SATA and SCSI have similar warranties clearly indicates the
> companies have enough confidence in SATA that they consider it durable.

No, it doesn't. They *may* have that confidence (although when asked, they
contradict you, but what do they know?), but you are ignoring other factors.

Ask any marketing professional about "take up" rates. For any offer, service,
or program that a manufacturer provides, some proportion of customers won't
take advantage of it even when they could. Sometimes this is because they
lose the necessary documentation, other times because they forget, and still
other times because they don't care about replacing the failed unit with
another equivalent unit (e.g. if you're going through the hassle of replacing
the thing, why not upgrade at the same time?).

> A logical rebuttal might be that manufacturers could offer lifetime
> warranties on SCSI drives because they are just that durable, but a
> warranty that long doesn't make sense from a marketing point of view
> because the manufacturers do want their customers to upgrade eventually.

You call *that* "logical"? The fact is that drives are designed with a
service life in mind. This allows them to be sealed units, with no
replaceable filters, no lubrication points, etc. This makes them cheaper.
(You know we used to have disks that had filters, lubrication, etc., right?)

[ Snip ]

> > It doesn't. Marketing decided that to sell drives in their target market
> > they needed a 5 year warranty, and they built the cost of that into the
> > price of the drive.
>
> It clearly does. If the failure rate for drives increases in the fourth and
> fifth year, it will cost the company money to replace the broken drives.

Do you really believe that the same proportion of people take manufacturers
up on the warranty after (say) 3 years as do after 1 month?

Any enterprise that was using them for mission-critical storage. Don't assume
that the enterprise market is like a home user who can shut his machine down
on a whim.

> This is also illogical. It's like saying you can't ever upgrade.

Your version of "logic" is somewhat, umm, naive!

Malc.
#48
flux wrote:

> In article , "J. Clarke" wrote:
>
> > > It looks like the "plain" Maxtors get a 3 year warranty and the
> > > "high-end" get five, same as their SCSI counterparts.
> >
> > By that reasoning Hyundai makes the best car on Earth. Warranty length is
> > related to marketing concerns, not to durability.
>
> No, your analogy doesn't fly, because warranties cost money if the drives
> have a high failure rate. The reason companies can offer warranties of such
> lengths is because their products are at least durable enough to make it
> cost-effective.

But they are basing their warranty calculations on how the drive is used,
and (with the exception of WD's 10K drives) they expect them to go into PC
devices which don't run 24x7, so the MTBF is expected to be stretched because
the drive is spending a good deal of its time doing very little or powered
down.

--
Nik Simpson
#49
flux wrote:

> In article , Malcolm Weir wrote:
>
> > Only if the network traffic has a 1:1 correspondence with the disk
> > traffic. This is very, very rare in large environments. Plus you have
> > forgotten that Ethernet is full-duplex...
>
> I don't know whether to interpret this as saying everyone is on a very slow
> network or a very fast network. Those scenarios are probably very rare, so
> either way, it supports my argument. SATA is plenty fast enough on a server
> to handle clients on an ordinary Ethernet network.

"Plenty fast enough" varies considerably from one application to another.
Yes, SATA is fast enough for some types of application, but that doesn't
translate to fast enough for all applications.

--
Nik Simpson
#50
> > You are completely wrong (did you ever study statistics?).
>
> Reread what I wrote carefully, and you will see that it is quite correct.

Yes, I did. You have said:

> "So this number taken at face value is pretty silly because it's
> essentially saying it won't be until sometime in the 22nd century before
> the first SCSI hard disk anywhere on Earth fails!"

No, your understanding is NOT correct; an MTBF number does not imply that!
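[Editor's note: what an MTBF figure does imply is a failure *rate* across a
population. A hedged sketch under the usual constant-failure-rate
(exponential lifetime) model; the 1,000-drive fleet size is hypothetical.]

```python
# MTBF of 1,200,000 hours under a constant-failure-rate model.
mtbf_hours = 1_200_000
hours_per_year = 24 * 365            # 8,760 hours of 24x7 operation

# Annualized failure rate for a single drive (good approximation while
# hours_per_year / mtbf_hours is much smaller than 1).
afr = hours_per_year / mtbf_hours    # about 0.73% per year

# Expected failures per year in a hypothetical 1,000-drive installation.
fleet_size = 1000
failures_per_year = fleet_size * afr

print(f"per-drive annualized failure rate: {afr:.2%}")              # ~0.73%
print(f"expected failures/year in the fleet: {failures_per_year:.1f}")  # ~7.3
```

So a 1,200,000-hour MTBF predicts several failures per year in any sizeable
array, which is precisely why the "no failures until the 22nd century"
reading is wrong.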