A computer components & hardware forum. HardwareBanter


ATA Reliability: Seagate, WD, Maxtor



 
 
#61
April 15th 05, 05:30 AM
flux

In article 1113501013.318834@smirk,
wrote:

Furthermore, on real computers, power supplies and fans are all
redundant, and can be hot-swapped. Look at the back of a good


But those things never seem to go down. Motherboards, RAM, and CPUs do, and
apparently with notably greater frequency!


Very funny. I don't know how many power supplies and fan trays I've
lugged from the shipping & receiving dock to my lab, unpacked them,
and installed them on running computers (or disk enclosures or RAID
arrays). Probably a few dozen. And my job is not field service or


Oh, it is very funny. For me, power supply failures are very rare. It's
the least likely hardware problem with a machine. I occasionally see a
noisy fan, but outright fan failure is probably even rarer than a failed
power supply.

maintenance at all, I just have to take care of my own machines. Over
the same period, I had exactly one motherboard fail (and it was not
exactly a motherboard failure; the problem was in the connector to the

Two motherboard failures just last week.

Dead disk drives? Dozens. Maybe hundreds. Don't care any more. I
keep a stack of spares on the shelf; when the stack runs low, I order
more spares.


To be fair, there were three failed drives in the last week, but for one
of them the associated machine had a previously failed Ethernet card, and
for another the associated machine had a previously failed memory chip.

and cheap). Very rarely computer failures. Note: all this data
pertains to enterprise-grade hardware (made by the big computer
companies with short names, all rack mounted, all installed in
well-cooled computer rooms with stable power). Depending on where you


I'm describing ordinary desktops.

At this point, I'm tired of arguing with Mr. flux. For example, he
asked for data showing that disks without queueing support handle
complex workloads very badly.


No, I asked for data that ATA drives handle complex workloads badly.
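
To be concrete about what is actually in dispute: nobody denies that letting
a drive reorder a queue of random requests cuts head travel; a toy model
shows that in a few lines. What I keep asking for is measured evidence that
ATA drives, as shipped, fall apart on such workloads. A rough Python sketch
of the uncontested part (all numbers invented):

import random

random.seed(0)
N_LBA = 1_000_000     # pretend drive with one million logical blocks
QUEUE_DEPTH = 32      # requests the drive is allowed to reorder at once
N_REQUESTS = 10_000

requests = [random.randrange(N_LBA) for _ in range(N_REQUESTS)]

def total_seek(order):
    # Total head travel, in blocks, when servicing requests in this order.
    head, dist = 0, 0
    for lba in order:
        dist += abs(lba - head)
        head = lba
    return dist

# No queueing: service strictly in arrival order.
fifo = total_seek(requests)

# Queueing: within each window of QUEUE_DEPTH outstanding requests,
# service in LBA order (a crude stand-in for TCQ/NCQ elevator ordering).
reordered = []
for i in range(0, N_REQUESTS, QUEUE_DEPTH):
    reordered.extend(sorted(requests[i:i + QUEUE_DEPTH]))
queued = total_seek(reordered)

print("FIFO head travel:  ", fifo)
print("Queued head travel:", queued)
print("Improvement: %.1fx" % (fifo / queued))

That prints a several-fold reduction in head travel. Whether that gap
translates into the claimed real-world difference for ATA drives is exactly
the data I'm asking for.
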
#63
April 15th 05, 05:38 AM
flux

In article ,
Alfred Falk wrote:

Why do you need long cables?


As the other posters here say over and over: it's obvious you don't have
much experience. Longer cables so you can reach further in large
installations.


Farther to where? The next floor?

When do you ever use these jumpers?


Perhaps when you have more than one device on a bus?


When do you have just one device on a bus?

I agree that it should be this way, that everybody runs around telling
people it's true (you are telling us), that it makes sense, and that it
probably *has been* true in the past. But today, things apparently
have changed. Moving parts are as robust as, if not more so than, the
electronics.


What evidence do you have that this is changed? I certainly don't see
any in my workplace.


Actually, I'm not really sure it has changed. It could just be that the
idea of a high rate of power supply, fan, and drive failures has always
been a myth.

Perhaps we should consider applying RAID to motherboards, CPUs and
memory.


Redundant systems are common in the high-end world.


Perhaps because motherboards, CPUs and memory fail so frequently.
#64
April 15th 05, 05:46 AM
flux

In article ,
Curious George wrote:

Let's see the data.


Let's see you listen. The basis of such an observation has been
repeated ad nauseam. Let's see you test both.


I am asking for the observations.

I don't see how it's more flexible. In fact, it's less flexible.
SCSI has to deal with ID numbers and termination. This is irrelevant to
SATA. SATA also has slim cables.

Longer cable lengths,

Why do you need long cables?


As the other posters here say over and over: it's obvious you don't have
much experience. Longer cables so you can reach further in large
installations.


Right. For convenient, external, modular DAS


So are they on the next floor?

boxes. Controllers tend to be better.

What does that mean exactly?

numbers & termination are far from complicated. Most SATA doesn't
allow remote or delayed start, and the LED signal comes from the
controller instead of the drive.

Is that really the case?


Figures you don't know


I know it isn't the case.


not just RAID management but disk diagnostics & disk tuning.


That's something that RAID has always been missing.


Also think of what a bitch it is to combine 50 spindles in a single,
quality-built DAS array. With SCSI that's a breeze. On an FC-AL SAN
that's nothing as well.


So it is the case with SATA.


Since I'm sure you've only seen Dimensions, then yes. By and large,
Dells are nothing to get a hard-on over.


They are just computers like any other. Actually, Dell automatically
dispatches onsite techs, so it behooves them to make a reliable box.


Redundant systems are common in the high-end world.


It just shows his point of reference


But it does seem to suggest that they are just like other computers,
prone to electronic failures.
#65
April 15th 05, 05:51 AM
flux

In article ,
(Thor Lancelot Simon) wrote:

In article ,
flux wrote:
In article 1113415288.189491@smirk,
wrote:


exposed to those access patterns? I would guess that operating a Tivo
(at maybe a few MB/s, nearly completely sequential, large IOs) barely


Your guess is wrong.

during the night they are running data mining, analytics, and backup.
The access patterns are different, but they tend to be busy all the
time. Why? If they were not busy some time of the day, they are


There is no why because there probably isn't much difference.


And why, exactly, is that? Because you say so?

Doing sequential writes loads the mechanical and magnetic components of
a drive in an entirely different fashion than doing a random I/O load
including reads, writes, and long seeks. There is no rational reason to
assume that a continuous sequential-write load will cause the same type
of failures as a continuous random I/O load -- nor vice-versa.

As a trivial real-world example, in a continuous sequential-write load
like that of a TiVo it is pretty much the case that every sector is
overwritten the same number of times. In a random I/O load through a
filesystem, blocks containing metadata will be rewritten many more times
than any _particular_ data block. The result of this is that if you have
a drive with cheap magnetic media in which individual sectors can only
be reliably overwritten a few tens or hundreds of thousands of times,
*which is in fact a common failure mode of some low-end desktop drives*,
the drive will fail much more quickly under the random I/O load than
under the TiVo load.
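
If the skew isn't obvious, a toy model makes it concrete: spread data
writes at random over the disk but send every operation's metadata update
to a handful of fixed blocks, and the metadata blocks rack up orders of
magnitude more overwrites than any particular data block. A rough Python
sketch (the block counts and the one-metadata-write-per-data-write ratio
are invented; only the shape of the result matters):

import random
from collections import Counter

random.seed(0)
N_BLOCKS = 100_000    # pretend disk with 100k blocks
N_WRITES = 1_000_000
META_BLOCKS = 64      # a few fixed blocks holding filesystem metadata

# Sequential, DVR-style load: walk the disk in order, over and over.
seq = Counter()
for i in range(N_WRITES):
    seq[i % N_BLOCKS] += 1

# Random filesystem-style load: each data write to a random block also
# rewrites one of the fixed metadata blocks.
rnd = Counter()
for _ in range(N_WRITES):
    rnd[random.randrange(META_BLOCKS, N_BLOCKS)] += 1   # data block
    rnd[random.randrange(META_BLOCKS)] += 1             # metadata block

print("sequential load: worst block overwritten", max(seq.values()), "times")
print("random FS load:  worst block overwritten", max(rnd.values()), "times")

With these made-up numbers the sequential case never overwrites any block
more than ten times, while each metadata block in the random case sees on
the order of fifteen thousand overwrites. A drive whose media tolerates
only a limited number of rewrites per sector dies at the metadata blocks
long before the rest of the platter wears out.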

This is the sort of problem that those of us with actual experience with
large installations with many drives in servers with different workloads
have actually seen in practice; for example, I recently pulled every
Samsung SP040 out of an entire rack of servers because they had an
unacceptable failure rate even when paired in RAID 1 mirrors due to
exactly this issue. But, if I recall correctly, you're a college kid
posting from your dorm room; you have a lot of fancy theories and angry
talk but no actual experience with large installations in the field. I
see you've learned something about TiVo; did you also learn that DVR
manufacturers work closely with drive vendors to ensure that the drives
they ship have hardware and firmware carefully tuned for the particular
kind of continuous load that they require? One obvious example is that
many desktop drives do not depower the read electronics even when the
drive is not reading; this leads to increased wear, and decreased lifetime,
of the read head when the drive is placed in 100% duty cycle service; and
it is precisely the sort of thing that manufacturers tweak when tuning a
drive for a particular application (other such things involve using higher
quality magnetic media, as I mention above, changing load/unload behaviour,
and many more).

What _is_ nonsense is that all ATA (much less all SATA) drives are cheap
junk that can't be used in enterprise applications. Manufacturers make,
and warranty, a pretty good range of drives in both PATA and SATA now
that can be used in high duty cycle applications with a reasonable
degree of confidence, e.g. the Maxtor MAXline and WD Raptor and RAID
Edition drives. But these drives are most emphatically _not_ the same
in some ways as generic "desktop" ATA or SATA drives, and it is not
reasonable to say that all such drives will survive enterprise use,
though some may be built well enough to do so.

This also used to be true of SCSI drives; but the bottom dropped out of
the SCSI desktop drive market and left *only the high-end server
products* behind. The result is that though you can trust that _some_
particular SATA and ATA drives are designed, and tuned, for server use,
you can trust that just about all current SCSI drives are that way.
So there is a difference -- but it is not the difference most people
seem to think there is. Of course, it is also not the _lack_ of
difference you seem to insist there is; but since you seem to know
essentially nothing about anything, but enjoy talking very loud and
very often, that is not too surprising to me.


I agree with you except where you don't agree with yourself.
#66
April 15th 05, 06:12 AM
Paul Rubin

flux writes:
I'm describing ordinary desktops.


Aha. Everyone else is describing high volume storage servers, not
desktops.
#67
April 15th 05, 12:10 PM
Curious George

On Fri, 15 Apr 2005 04:46:01 GMT, flux wrote:

They are just computers like any other.


Wow, if only I knew this sooner! My company & I would have saved all
this money on eMachines, or Acers, or maybe some really old Packard
Bells!

From now on I'm just going to find the cheapest pee cee I can find!
With some extra ATA disks I could make a datacenter! The base models
will be fine for everything I want to do and every file that is
important to me (you know, like maps & cheat codes).

All I need now is a kit for a window & a neon light! No wait, they
sell those already done. Kewl!
#68
April 15th 05, 06:44 PM
Ralph Becker-Szendy

In article ,
flux wrote:
In article ,
Alfred Falk wrote:

Why do you need long cables?


As the other posters here say over and over: it's obvious you don't have
much experience. Longer cables so you can reach further in large
installations.


Farther to where? The next floor?


The smallest computer room of mine in the last 8 or so years had 6
racks worth of equipment. The largest has maybe 40 or 50 racks (I
haven't counted; since it is shared with a few other people, I only know
which racks are mine). This doesn't count a large computer room where
I was only in charge of a small part (my part was 4 racks, but the
whole room must have had several hundred racks worth, as big as 2
football fields; rumor has it that it had over 3000 servers in it).

Try connecting a few hundred disks in 4 or 5 racks with SATA cables.
It is completely insane. Then try to do the same thing with FC (using
a mix of fabric and FC-AL, to reduce the number of expensive Brocades
that are necessary). Now it is a heck of a lot of work, but doable.
Now try to do the same thing with little clusters: a handful of RAID
servers, each with a half dozen JBODs connected via SCSI or FC-AL, and
then interconnected via gigE to a central network hub that connects
via 10gigE to the backbone. Now it is actually doable (even then just
the wiring took us several days). Even with such an installation
(where the disks are within 2 feet of their attachment point), SATA
wouldn't work, because every RAID server is connected to about 40
disks, and you can't in practice connect 40 SATA cables to a single
box, whereas it is easy to connect 3 or 4 FC or USCSI cables.
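
The port arithmetic alone makes the point. A back-of-the-envelope Python
sketch (the 14-disk JBOD size is an assumption for illustration, not a
real bill of materials):

# Cabling needed to attach one RAID server to its disks.
disks_per_server = 40

# SATA is point-to-point: one cable and one host port per disk.
sata_cables = disks_per_server

# SCSI / FC-AL: disks sit in JBOD enclosures that share a channel;
# assume about 14 disks per enclosure and one cable per enclosure chain.
disks_per_jbod = 14
jbod_chains = -(-disks_per_server // disks_per_jbod)   # ceiling division
scsi_or_fc_cables = jbod_chains

print("SATA cables into the server:      ", sata_cables)
print("SCSI/FC-AL cables into the server:", scsi_or_fc_cables)

Forty cables and forty host ports on a single server simply don't exist
in practice; three or four shared channels do.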

Again, you might think that SATA cables are easy to work with because
they are small, flexible, and have convenient little connectors.
That's true - compared to PATA or to 50-pin SCSI. In the world of
large systems, compare that to FC with SFF connectors (these are the
2-strand optical cables with connectors that look a little like
modular plugs, with two optical interfaces sticking out the front):
The FC cables are much smaller than SATA cables, are more flexible,
and take much better to being bundled by the dozen. The SFF connector
is a little smaller than the SATA connector, and it locks solidly
(which means the connections will stay put if someone jiggles the
cable bundle when doing maintenance).

In a moderate-size computer room (many hundreds of disks, dozens or
low hundreds of servers, with RAID controllers in between), you need a
heck of a lot of 5 m and 25 m (roughly 16 foot and 82 foot) cables, just
to reach stuff within the room. A 2 m (roughly 6 foot) cable is usually good only
within a rack, and if you are really lucky, it can interconnect two
neighboring racks through the hollow floor or in the overhead tray
(but only if the thing being connected is near the bottom or the top
of the rack). SATA is just out for this kind of interconnect.

I'm sorry to harp on this point, but SATA is ONLY a useful technology
for connecting disks within a box, or between a box and an extension
cabinet that's right next to a box. SATA is no longer useful in
practice once the installation reaches the size of multiple racks.
But as a replacement for PATA, for computers or RAID servers with
between 1 and 4 disks, it is an excellent improvement (which is why I
have a computer with internal SATA and two external SATA ports next to
my desk, but not in the computer room).

--
The address in the header is invalid for obvious reasons. Please
reconstruct the address from the information below (look for _).
Ralph Becker-Szendy
#70
April 16th 05, 08:20 PM
flux

In article ,
Paul Rubin wrote:

flux writes:
I'm describing ordinary desktops.


Aha. Everyone else is describing high volume storage servers, not
desktops.


Which, by everyone's description, are just as prone to failure. Otherwise,
why would one have redundant components?
 






