ATA Reliability: Seagate, WD, Maxtor



 
 
  #52  
Old April 14th 05, 06:43 AM
flux

In article ,
Curious George wrote:

What is the performance and what does it mean to choke? Are you actually
saying the hard drive starts slowing down and even stops outright,
crashes?


Performance drops off severely with multiple IO transactions/
demanding head use. More severely than I expected at least (my bad I
guess)


Let's see the data.
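(One way to actually collect such data: a minimal Python micro-benchmark sketch, not from this thread, that issues 4 KiB random reads against a large test file with an increasing number of concurrent streams and reports aggregate throughput and mean latency. The test-file path is a placeholder, and the file should be much larger than RAM so the drive, not the page cache, is what gets measured.)

# Hedged sketch (not from the thread): watch random-read throughput and
# latency as the number of concurrent I/O streams grows.
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "/path/to/large_test_file"  # placeholder path (assumption)
BLOCK = 4096                            # 4 KiB per read
READS_PER_WORKER = 2000

def worker(fd, size):
    """Issue random 4 KiB reads; return total time spent inside pread()."""
    busy = 0.0
    for _ in range(READS_PER_WORKER):
        offset = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
        t0 = time.perf_counter()
        os.pread(fd, BLOCK, offset)   # Unix-only; releases the GIL while waiting
        busy += time.perf_counter() - t0
    return busy

def run(workers):
    fd = os.open(TEST_FILE, os.O_RDONLY)
    size = os.fstat(fd).st_size
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        busy = list(pool.map(lambda _: worker(fd, size), range(workers)))
    wall = time.perf_counter() - start
    os.close(fd)
    total = workers * READS_PER_WORKER
    print(f"{workers:2d} streams: {total / wall:8.0f} reads/s, "
          f"mean latency {sum(busy) / total * 1000:6.2f} ms")

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        run(n)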

I don't see how it's more flexible. In fact, it's less flexible. SCSI
has to deal with ID numbers and termination. This is irrelevant to SATA. SATA
also has slim cables.


Longer cable lengths,


Why do you need long cables?

boxes. Controllers tend to be better.


What does that mean exactly?

numbers & termination are far from complicated. Most SATA doesn't
allow remote or delay start, and the LED comes from the controller instead
of the drive.


Is that really the case?

ALL PITA for integration. Many PPL also complain about
cable retention & other PITA issues.


What do you mean by cable retention? You can get locking SATA cables.

Probably a real difference is the number of devices that can be
connected to a controller.

Most SATA drives lack the normal
jumper options of SCSI, which can be helpful.


When do you ever use these jumpers?

What does it mean to be better supported? Seems to me they are the same
manufacturers with the same RMA procedures.


mostly via software management & diagnostics, modepages, etc.


I'm not sure. Promise makes SATA RAID boxes with a very nice set of
management tools. And 3Ware is also pretty good.


SATA will be better along these lines, but not yet really.

etc. and costs about the same.


Costs the same?


For example:
Raptor vs., say, Cheetah 10K.6


Seagate:
http://www.spacecentersystems.com/ca...products_id/9437?refsrc=froogle
Raptor:
http://www.tritechcoa.com/product/067848.html

That's a $100 difference. It adds up if you need 50 of them.



3ware9500 vs LSI Megaraid

This is a reasonable comparison. More devices can be hooked up to the
LSI card.

The thing is I'm not sure that it actually happens that way. What I see
is that the actual electronics of the computer: motherboards, RAM, CPU
break down as well. I would say motherboards are the most likely component
to fail and power supplies the least. Apparently, this is the opposite
of the accepted wisdom.


Not true, unless you are accustomed to buying cheap crap mobos and
horrible cases.


It really is true! Really.

If I mention Dell, does that mean you tell me they use cheap crap mobos
and horrible cases?

The disks should normally be the first to go or at
least hiccup followed by the fans & maybe PSU. It's the moving parts
that fail first. IC's, etc. last a long time in a proper environment,
even under full load. Embedded systems are used in harsh environments
for a reason.


I agree that it should be this way, everybody runs around telling
people it's true (you are telling us), it makes sense, and it
probably *has been* true in the past. But today, things apparently have
changed. Moving parts are as robust as, if not more so than, the electronics.

Perhaps we should consider applying RAID to motherboards, CPUs and
memory.
  #53  
Old April 14th 05, 06:44 AM
Thor Lancelot Simon

In article ,
flux wrote:
In article 1113415288.189491@smirk,
wrote:


exposed to those access patterns? I would guess that operating a Tivo
(at maybe a few MB/s, nearly completely sequential, large IOs) barely


Your guess is wrong.

during the night they are running data mining, analytics, and backup.
The access patterns are different, but they tend to be busy all the
time. Why? If they were not busy some time of the day, they are


There is no why because there probably isn't much difference.


And why, exactly, is that? Because you say so?

Doing sequential writes loads the mechanical and magnetic components of
a drive in an entirely different fashion than doing a random I/O load
including reads, writes, and long seeks. There is no rational reason to
assume that a continuous sequential-write load will cause the same type
of failures as a continuous random I/O load -- nor vice-versa.

As a trivial real-world example, in a continuous sequential-write load
like that of a TiVo it is pretty much the case that every sector is
overwritten the same number of times. In a random I/O load through a
filesystem, blocks containing metadata will be rewritten many more times
than any _particular_ data block. The result of this is that if you have
a drive with cheap magnetic media in which individual sectors can only
be reliably overwritten a few tens or hundreds of thousands of times,
*which is in fact a common failure mode of some low-end desktop drives*,
the drive will fail much more quickly under the random I/O load than
under the TiVo load.
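(To make the wear argument concrete, here is a hedged toy simulation, not from the thread and with an invented workload mix: it counts per-block overwrites under a wrap-around sequential write pattern versus a random pattern in which a small set of "metadata" blocks is updated on every operation. The sequential load spreads overwrites evenly; the random load hammers the metadata blocks.)

# Hedged toy model (illustrative assumptions, not measured data): compare how
# overwrites are distributed across blocks under a TiVo-like sequential load
# and a filesystem-like random load with hot metadata blocks.
import random

BLOCKS = 100_000          # pretend-disk size in blocks
WRITES = 1_000_000        # write operations in each scenario
META_BLOCKS = 64          # small "metadata" region (assumption)

def sequential_wear():
    """Wrap-around sequential writes: every block gets the same count."""
    wear = [0] * BLOCKS
    for i in range(WRITES):
        wear[i % BLOCKS] += 1
    return wear

def random_fs_wear():
    """Random data writes, each also updating one random metadata block."""
    wear = [0] * BLOCKS
    for _ in range(WRITES):
        wear[random.randrange(META_BLOCKS, BLOCKS)] += 1   # data block
        wear[random.randrange(0, META_BLOCKS)] += 1        # metadata block
    return wear

if __name__ == "__main__":
    for name, wear in (("sequential", sequential_wear()),
                       ("random+metadata", random_fs_wear())):
        print(f"{name:16s} max overwrites per block: {max(wear):>8d} "
              f"(mean {sum(wear) / BLOCKS:.1f})")
    # The hot metadata blocks see orders of magnitude more rewrites than any
    # single block under the sequential load, which is the failure pattern
    # described above for media with limited overwrite endurance.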

This is the sort of problem that those of us with actual experience with
large installations with many drives in servers with different workloads
have actually seen in practice; for example, I recently pulled every
Samsung SP040 out of an entire rack of servers because they had an
unacceptable failure rate even when paired in RAID 1 mirrors due to
exactly this issue. But, if I recall correctly, you're a college kid
posting from your dorm room; you have a lot of fancy theories and angry
talk but no actual experience with large installations in the field. I
see you've learned something about TiVo; did you also learn that DVR
manufacturers work closely with drive vendors to ensure that the drives
they ship have hardware and firmware carefully tuned for the particular
kind of continuous load that they require? One obvious example is that
many desktop drives do not depower the read electronics even when the
drive is not reading; this leads to increased wear, and decreased lifetime,
of the read head when the drive is placed in 100% duty cycle service; and
it is precisely the sort of thing that manufacturers tweak when tuning a
drive for a particular application (other such things involve using higher
quality magnetic media, as I mention above, changing load/unload behaviour,
and many more).

What _is_ nonsense is that all ATA (much less all SATA) drives are cheap
junk that can't be used in enterprise applications. Manufacturers make,
and warranty, a pretty good range of drives in both PATA and SATA now
that can be used in high duty cycle applications with a reasonable
degree of confidence, e.g. the Maxtor MAXline and WD Raptor and Raid
Edition drives. But these drives are most emphatically _not_ the same
in some ways as generic "desktop" ATA or SATA drives, and it is not
reasonable to say that all such drives will survive enterprise use,
though some may be built well enough to do so.

This also used to be true of SCSI drives; but the bottom dropped out of
the SCSI desktop drive market and left *only the high-end server
products* behind. The result is that though you can trust that _some_
particular SATA and ATA drives are designed, and tuned, for server use,
you can trust that just about all current SCSI drives are that way.
So there is a difference -- but it is not the difference most people
seem to think there is. Of course, it is also not the _lack_ of
difference you seem to insist there is; but since you seem to know
essentially nothing about anything, but enjoy talking very loud and
very often, that is not too surprising to me.

--
Thor Lancelot Simon


"The inconsistency is startling, though admittedly, if consistency is to be
abandoned or transcended, there is no problem." - Noam Chomsky
  #54  
Old April 14th 05, 06:49 AM
flux

In article 1113416697.696293@smirk,
(Ralph Becker-Szendy) wrote:

In article ,
Curious George wrote:
Not true, unless you are accustomed to buying cheap crap mobos and
horrible cases. The disks should normally be the first to go or at
least hiccup followed by the fans & maybe PSU. It's the moving parts
that fail first. IC's, etc. last a long time in a proper environment,
even under full load. Embedded systems are used in harsh environments
for a reason.


Furthermore, on real computers, power supplies and fans are all
redundant, and can be hot-swapped. Look at the back of a good


But those things never seem to go down. Motherboards, RAM, CPUs do and
apparently with notably more frequency!

This is where RAID comes in. But even that's pretty difficult. PATA
was not designed for hot-swap; the fact that some disk enclosures and
the 3ware cards can do it at all is a bit of a miracle. At least SATA
is designed for hot swap. All SCSI and FC drives sold today are
hot-swappable (that's why many of the SCSI drives are sold with the
80-pin connector that integrates data and power in one connector).


Wait, don't they need the SCA connector to be hot-swappable? Most SCSI
drives do *not* have this connector. That would mean most of them are not
hot-swappable.

In summary, the disks are and remain the least reliable component of a
serious computer system.


In summary, disks, ATA or otherwise, seem to be really no less reliable
than any other component in the computer.
  #55  
Old April 14th 05, 08:28 AM
Thor Lancelot Simon

In article ,
flux wrote:
In article 1113416697.696293@smirk,
(Ralph Becker-Szendy) wrote:

In article ,
Curious George wrote:
Not true, unless you are accustomed to buying cheap crap mobos and
horrible cases. The disks should normally be the first to go or at
least hiccup followed by the fans & maybe PSU. It's the moving parts
that fail first. IC's, etc. last a long time in a proper environment,
even under full load. Embedded systems are used in harsh environments
for a reason.


Furthermore, on real computers, power supplies and fans are all
redundant, and can be hot-swapped. Look at the back of a good


But those things never seem to go down. Motherboards, RAM, CPUs do and
apparently with notably more frequency!


Apparently to _you_, perhaps. But all that does is confirm that you
plainly have very, very little experience -- at all, much less with
installations large enough to notice general trends.

Fans fail all the time. In fact, it's remarkably uncommon to see a
CPU or memory fail if some part of its cooling system, generally a
fan or its controller, hasn't failed first - either outright, or for
example by clogging up with dirt, a problem that the extra capacity
provided by redundant fans effectively addresses.

On the other hand I'm sure you've seen plenty of overclocked PC gamer
motherboards fail. That's right up your alley, isn't it?

--
Thor Lancelot Simon


"The inconsistency is startling, though admittedly, if consistency is to be
abandoned or transcended, there is no problem." - Noam Chomsky
  #56  
Old April 14th 05, 06:50 PM
_firstname_@lr_dot_los-gatos_dot_ca.us

In article ,
flux wrote:
In article 1113416697.696293@smirk,
(Ralph Becker-Szendy) wrote:

In article ,
Curious George wrote:
Not true, unless you are accustomed to buying cheap crap mobos and
horrible cases. The disks should normally be the first to go or at
least hiccup followed by the fans & maybe PSU. It's the moving parts
that fail first. IC's, etc. last a long time in a proper environment,
even under full load. Embedded systems are used in harsh environments
for a reason.


Furthermore, on real computers, power supplies and fans are all
redundant, and can be hot-swapped. Look at the back of a good


But those things never seem to go down. Motherboards, RAM, CPUs do and
apparently with notably more frequency!


Very funny. I don't know how many power supplies and fan trays I've
lugged from the shipping & receiving dock to my lab, unpacked them,
and installed them on running computers (or disk enclosures or RAID
arrays). Probably a few dozen. And my job is not field service or
maintenance at all, I just have to take care of my own machines. In
the same time, I had exactly one motherboard fail (and it is not
exactly a motherboard failure: the problem is in the connector to the
power supplies, and the motherboard keeps wrongly reporting that +12V
is out on one of the two power supplies, so the machine is essentially
running with non-redundant power supplies, at which point I declared
the MoBo to be not worth using). Oh, and I had one CPU fail (on a
4-way machine), but the machine kept running correctly with 3 CPUs for
several days (this was a high-end Unix machine, not an x86).
Unfortunately, it was an older model where replacing the CPU required
power cycling the machine (on newer models you can hot-swap CPUs and
memory too).

Dead disk drives? Dozens. Maybe hundreds. Don't care any more. I
keep a stack of spares on the shelf; when the stack runs low, I order
more spares.

In summary: Many many disk failures. Occasional power supply and fan
failures. Also occasionally interconnect failures (like backplanes in
JBODs, or fibre channel GBICs and SFPs, but those are hot-swappable
and cheap). Very rarely computer failures. Note: all this data
pertains to enterprise-grade hardware (made by the big computer
companies with short names, all rack mounted, all installed in
well-cooled computer rooms with stable power). Depending on where you
get your computers from, YMMV.

At this point, I'm tired of arguing with Mr. flux. For example, he
asked for data that disks without queueing support handle complex
workloads very badly. It's not my job to reteach the fundamentals of
storage architecture to people who don't have the patience to find the
information themselves. Please read up on the ample literature on
disk drive performance. For starters, look at Ruemmler&Wilkes: "An
introduction to disk drive modeling" (over 10 years old, so look at
the method, not the conclusions). Then find modern papers that cite
it. Do a google or citeseer search for "elevator algorithm" or "disk
drive head scheduling". In the meantime, until you have spent the
effort acquiring knowledge, please don't go around simply shouting
"wrong" or "where's the beef". Thanks.

--
The address in the header is invalid for obvious reasons. Please
reconstruct the address from the information below (look for _).
Ralph Becker-Szendy

  #57  
Old April 14th 05, 07:41 PM
Alfred Falk

flux wrote in
:

In article ,
Curious George wrote:

What is the performance and what does it mean to choke? Are you
actually saying the hard drive starts slowing down and even stops
outright, crashes?


Performance drops off severely with multiple IO transactions/
demanding head use. More severely than I expected at least (my bad I
guess)


Let's see the data.

I don't see how it's more flexible. In fact, it's less flexible.
SCSI has to deal with ID numbers and termination. This is irrelevant to
SATA. SATA also has slim cables.


Longer cable lengths,


Why do you need long cables?


As the other posters here say over and over: it's obvious you don't have
much experience. Longer cables so you can reach further in large
installations.

boxes. Controllers tend to be better.


What does that mean exactly?

numbers & termination are far from complicated. Most SATA doesn't
allow remote or delay start, and the LED comes from the controller instead
of the drive.


Is that really the case?

ALL PITA for integration. Many PPL also complain about
cable retention & other PITA issues.


What do you mean by cable retention? You can get locking SATA cables.

Probably a real difference is the number of devices that can be
connected to a controller.

Most SATA drives lack the normal
jumper options of SCSI, which can be helpful.


When do you ever use these jumpers?


Perhaps when you have more than one device on a bus?

What does it mean to be better supported? Seems to me they are the same
manufacturers with the same RMA procedures.


mostly via software management & diagnostics, modepages, etc.


I'm not sure. Promise makes SATA RAID boxes with a very nice set of
management tools. And 3Ware is also pretty good.


SATA will be better along these lines, but not yet really.

etc. and costs about the same.

Costs the same?


For example:
Raptor vs., say, Cheetah 10K.6


Seagate:
http://www.spacecentersystems.com/ca...hp/products_id/9437?refsrc=froogle
Raptor:
http://www.tritechcoa.com/product/067848.html

That's a $100 difference. It adds up if you need 50 of them.



3ware9500 vs LSI Megaraid

This is a reasonable comparison. More devices can be hooked up to the
LSI card.

The thing is I'm not sure that it actually happens that way. What I
see is that the actual electronics of the computer: motherboards,
RAM, CPU break down as well. I would say motherboards are the most
likely component to fail and power supplies the least. Apparently,
this is the opposite of the accepted wisdom.


Not true, unless you are accustomed to buying cheap crap mobos and
horrible cases.


It really is true! Really.

If I mention Dell, does that mean you tell me they use cheap crap
mobos and horrible cases?

The disks should normally be the first to go or at
least hiccup followed by the fans & maybe PSU. It's the moving parts
that fail first. IC's, etc. last a long time in a proper
environment, even under full load. Embedded systems are used in
harsh environments for a reason.


I agree that it should be this way, everybody runs around telling
people it's true (you are telling us), it makes sense, and it
probably *has been* true in the past. But today, things apparently
have changed. Moving parts are as robust as, if not more so than, the
electronics.


What evidence do you have that this has changed? I certainly don't see
any in my workplace.

Perhaps we should consider applying RAID to motherboards, CPUs and
memory.


Redundant systems are common in the high-end world.



--
----------------------------------------------------------------
A L B E R T A Alfred Falk
R E S E A R C H Information Systems Dept (780)450-5185
C O U N C I L 250 Karl Clark Road
Edmonton, Alberta, Canada
http://www.arc.ab.ca/ T6N 1E4
http://www.arc.ab.ca/staff/falk/
  #58  
Old April 15th 05, 12:34 AM
Curious George

Ah I see.

For a moment I thought you were interested in/able to learn about this
subject. Instead you never intended to go past making everyone dance
over the farcical, naive, superficial idea: "my desktop ATA drive
works fine. What are you talking about? All drives have to be the
same except SCSI is a rip-off!"

There's always at least one in every group. What a surprise.
  #59  
Old April 15th 05, 02:22 AM
Curious George

On Thu, 14 Apr 2005 18:41:38 GMT, Alfred Falk
wrote:

flux wrote in
:

In article ,
Curious George wrote:

What is the performance and what does it mean to choke? Are you
actually saying the hard drive starts slowing down and even stops
outright, crashes?

Performance drops off severely with multiple IO transactions/
demanding head use. More severely than I expected at least (my bad I
guess)


Let's see the data.


Let's see you listen. The basis of such an observation has been
repeated ad nauseam. Let's see you test both.

I don't see how it's more flexible. In fact, it's less flexible.
SCSI has to deal with ID numbers and termination. This is irrelevant to
SATA. SATA also has slim cables.

Longer cable lengths,


Why do you need long cables?


As the other posters here say over and over: it's obvious you don't have
much experience. Longer cables so you can reach further in large
installations.


Right. For convenient, external, modular DAS

boxes. Controllers tend to be better.


What does that mean exactly?

numbers & termination are far from complicated. Most SATA doesn't
allow remote or delay start, and the LED comes from the controller instead
of the drive.


Is that really the case?


Figures you don't know

ALL PITA for integration. Many PPL also complain about
cable retention & other PITA issues.


What do you mean by cable retention? You can get locking SATA cables.


I mean cable retention. The disks have fragile connectors and
generally nothing to lock into.

Probably a real difference is the number of devices that can be
connected to a controller.


Yes & no. "Can be connected" and "should be connected for optimal
performance" are different ideas (& different amounts).

Most SATA drives lack the normal
jumper options of SCSI, which can be helpful.


When do you ever use these jumpers?


Perhaps when you have more than one device on a bus?


I personally have had thoughts like: wouldn't it be great if a SATA
drive had a jumper to handle remote/delay start instead of using
controller software, or had a write-protect jumper, or let you
specify the disk cache setting on the drive when you have a dumb
controller, or connect the LED directly to the drive. Just a
pipe-dream wish list though.
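(As an aside on the cache-setting wish above: on a dumb controller the usual software substitute is toggling the drive's write cache from the host, for example with the standard Linux hdparm utility. A hedged sketch follows; hdparm and its -W flag are real, but the device name is a placeholder.)

# Hedged sketch: flip a SATA drive's volatile write cache from software,
# the usual stand-in for the cache-setting jumper wished for above.
# Assumes a Linux host with hdparm installed and root privileges;
# "/dev/sdX" is a placeholder device name.
import subprocess

def set_write_cache(device, enabled):
    """Enable or disable the drive's write cache via `hdparm -W1/-W0`."""
    flag = "-W1" if enabled else "-W0"
    subprocess.run(["hdparm", flag, device], check=True)

if __name__ == "__main__":
    set_write_cache("/dev/sdX", enabled=False)  # disable cache on placeholder device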

What does it mean to be better supported? Seems to me they are the same
manufacturers with the same RMA procedures.

mostly via software management & diagnostics, modepages, etc.


I'm not sure. Promise makes SATA RAID boxes with a very nice set of
management tools. And 3Ware is also pretty good.


not just RAID management but disk diagnostics & disk tuning.

SATA will be better along these lines, but not yet really.

etc. and costs about the same.

Costs the same?

For example:
Raptor vs., say, Cheetah 10K.6


Seagate:
http://www.spacecentersystems.com/ca...hp/products_id/9437?refsrc=froogle
Raptor:
http://www.tritechcoa.com/product/067848.html

That's a $100 difference. It adds up if you need 50 of them.


Not if you shop around more. Also drives and controllers are not the
only cost of arrays, esp not arrays w' 50 spindles.

Also think of what a bitch it is to combine 50 spindles in a single,
quality built DAS array. With SCSI that's a breeze. On a FCAL SAN
that's nothing as well.

You're still talking about things you have no familiarity with.

3ware9500 vs LSI Megaraid

This is a reasonable comparison. More devices can be hooked up to the
LSI card.

The thing is I'm not sure that it actually happens that way. What I
see is that the actual electronics of the computer: motherboards,
RAM, CPU break down as well. I would say motherboards are the most
likely component to fail and power supplies the least. Apparently,
this is the opposite of the accepted wisdom.

Not true, unless you are accustomed to buying cheap crap mobos and
horrible cases.


It really is true! Really.


Oh well if YOU say so.

If I mention Dell, does that mean you tell me they use cheap crap
mobos and horrible cases?


Since I'm sure you've only seen Dimensions, then yes. By and large,
Dells are nothing to get a hard-on over.

The disks should normally be the first to go or at
least hiccup followed by the fans & maybe PSU. It's the moving parts
that fail first. IC's, etc. last a long time in a proper
environment, even under full load. Embedded systems are used in
harsh environments for a reason.


I agree that it should be this way, everybody runs around telling
people it's true (you are telling us), it makes sense, and it
probably *has been* true in the past. But today, things apparently
have changed. Moving parts are as robust as, if not more so than, the
electronics.


What evidence do you have that this has changed? I certainly don't see
any in my workplace.

Perhaps we should consider applying RAID to motherboards, CPUs and
memory.


Redundant systems are common in the high-end world.


It just shows his point of reference
  #60  
Old April 15th 05, 03:01 AM
Curious George

On Fri, 15 Apr 2005 01:22:22 GMT, Curious George wrote:

Also think of what a bitch it is to combine 50 spindles in a single,
quality built DAS array.


Speaking of which, any endorsements out there for any brand-new gear
which supports Serial ATA II Port Multipliers?
 



