A computer components & hardware forum. HardwareBanter

ATA Reliability: Seagate, WD, Maxtor



 
 
  #41  
Old April 13th 05, 06:31 AM
Paul Rubin

Matthias Buelow writes:
The issue is not the platters rotating but head movement (constant
seeking). Put an ordinary ATA disk in a busy news server and watch it
explode.


That makes me wonder how fast cheap ATA drives explode when installed
in busy RAID arrays. I wonder if the drive vendors take a beating on
warranty service on drives that have been in RAID's.
  #42  
Old April 13th 05, 06:39 AM
flux

In article ,
Curious George wrote:

ATA is fast enough for light-duty use but chokes in a stressful
environment. ATA RAID is often downright pathetic.


SATA doesn't seem to share this problem. Is it really that different, or could it be that ATA is not as bad as you think?


I'm not talking about sharing the bus, just raw disk performance with
more complex/demanding usage than typical desktop usage patterns. If
you're thinking of Raptors specifically, state that. Raptors are not
typical SATA drives. They are closer to an apples-to-apples comparison,
but then the price is also similar to SCSI, while at the same time they
are newer, less mature, and have a less proven track record.

Right now I'm using/evaluating/testing Seagate 7200.8's in arrays.
Even though the synthetic benchmarks basically line up with what
they're supposed to be, it still chokes very easily. Very
disappointing...


What is the performance, and what does it mean to choke? Are you actually
saying the hard drive slows down, or even stops outright and crashes?

I'm sorry but I don't see the sense of raptors and a good 3ware card
or whatever. I don't care whether performance & reliability is
competitive or not. 10K SCSI makes more sense to me. It's more
mature, more flexible, better supported, has a longer track record,


I don't see how it's more flexible. In fact, it's less flexible. SCSI
has to deal with ID numbers and termination. That is irrelevant to SATA.
SATA also has slim cables.

What does it mean to be better supported? Seems to me they are the same
manufacturers with the same RMA procedures.

etc. and costs about the same.


Costs the same? Where do you get SCSI drives this cheap?

1st gen 10k sata compared to 6th or
7th gen 10k scsi. Come on.


Isn't SATA supposed to be the successor to ATA-6? That would make SATA
7th generation. Now, the technology is somewhat different from ATA, so
it's not exactly 7th generation, but it's hardly a first-generation
drive either. That would be the old Winchesters, no?

hands down, but then you pay for that. But if you go through more ATA
drives in the lifetime of a machine, cluster, etc, (even a small


The thing is, I'm not sure that it actually happens that way. What I see
is that the actual electronics of the computer (motherboards, RAM, CPU)
break down as well. I would say motherboards are the most likely component
to fail and power supplies the least. Apparently, this is the opposite
of the accepted wisdom.

Also, when a drive goes down, it could be an electronics failure. How's
that different from any other electronic component failure or is anyone
arguing that the ATA drives somehow get less reliable versions of these
components than other parts of a computer?
  #43  
Old April 13th 05, 06:51 AM
Curious George

On Wed, 13 Apr 2005 05:25:03 GMT, flux wrote:

In article ,
Matthias Buelow wrote:

flux writes:

Actually, it's not. Desktop drives are in use 24/7. Just check out video
recorders like Tivo. These things record video 24/7. AFAIK, they are
just ordinary ATA drives.


The issue is not the platters rotating but head movement (constant
seeking). Put an ordinary ATA disk in a busy news server and watch it
explode.


That's probably just a fraction of the work a Tivo machine requires of it.


???
  #44  
Old April 13th 05, 07:23 AM
Curious George

On Wed, 13 Apr 2005 05:39:18 GMT, flux wrote:

In article ,
Curious George wrote:

ATA is fast enough for light-duty use but chokes in a stressful
environment. ATA RAID is often downright pathetic.

SATA doesn't seem to share this problem. Is it really that different, or could it be that ATA is not as bad as you think?


I'm not talking about sharing the bus, just raw disk performance with
more complex/demanding usage than typical desktop usage patterns. If
you're thinking of Raptors specifically, state that. Raptors are not
typical SATA drives. They are closer to an apples-to-apples comparison,
but then the price is also similar to SCSI, while at the same time they
are newer, less mature, and have a less proven track record.

Right now I'm using/evaluating/testing Seagate 7200.8's in arrays.
Even though the synthetic benchmarks basically line up with what
they're supposed to be, it still chokes very easily. Very
disappointing...


What is the performance, and what does it mean to choke? Are you actually
saying the hard drive slows down, or even stops outright and crashes?


Performance drops off severely with multiple IO transactions and
demanding head use. More severely than I expected, at least (my bad, I
guess).

I'm sorry but I don't see the sense of raptors and a good 3ware card
or whatever. I don't care whether performance & reliability is
competitive or not. 10K SCSI makes more sense to me. It's more
mature, more flexible, better supported, has a longer track record,


I don't see how it's more flexible. In fact, it's less flexible. SCSI
has to deal with ID numbers and termination. That is irrelevant to SATA.
SATA also has slim cables.


Longer cable lengths, better options for external enclosures, especially
more robust boxes. Controllers tend to be better. Better error
correction/checksumming. ID numbers and termination are far from
complicated. Most SATA doesn't allow remote or delayed spin-up, and the
LED comes from the controller instead of the drive. All a PITA for
integration. Many people also complain about cable retention and other
PITA issues. Most SATA drives lack the normal jumper options of SCSI,
which can be helpful.

What does it mean to be better supported? Seems to me they are the same
manufacturers with the same RMA procedures.


Mostly via software management and diagnostics, mode pages, etc.
Sometimes also the vendor, etc.

SATA will be better along these lines, but not yet really.

etc. and costs about the same.


Costs the same?


For example:
Raptor vs., say, a Cheetah 10K.6
3ware 9500 vs. LSI MegaRAID

Where do you get SCSI drives this cheap?


Anywhere, e.g.
www.pricegrabber.com

1st gen 10k sata compared to 6th or
7th gen 10k scsi. Come on.


Isn't SATA supposed to be the successor to ATA-6? That would make SATA
7th generation. Now, the technology is somewhat different from ATA, so
it's not exactly 7th generation, but it's hardly a first-generation
drive either. That would be the old Winchesters, no?


Raptors are WD's first attempt at enterprise-class drives in a long
time. Their past SCSI drives left a lot to be desired, IMHO. The Raptor
is anyone's first attempt at a 10K enterprise-class SATA drive.

The interface was radically redesigned for SATA. SATA 1 essentially
specified a SATA-to-PATA bridge. True SATA is very new, even though it
builds on the older protocols of SCSI and ATA.

Don't forget that the ATA/ATAPI standards mean little to real-world
products. There's a whole mess of stuff that makes it look good on
paper but which isn't implemented in products, or in some cases not
implemented well.

hands down, but then you pay for that. But if you go through more ATA
drives in the lifetime of a machine, cluster, etc, (even a small


The thing is, I'm not sure that it actually happens that way. What I see
is that the actual electronics of the computer (motherboards, RAM, CPU)
break down as well. I would say motherboards are the most likely component
to fail and power supplies the least. Apparently, this is the opposite
of the accepted wisdom.


Not true, unless you are accustomed to buying cheap crap mobos and
horrible cases. The disks should normally be the first to go, or at
least hiccup, followed by the fans and maybe the PSU. It's the moving
parts that fail first. ICs, etc. last a long time in a proper
environment, even under full load. Embedded systems are used in harsh
environments for a reason.

Also, when a drive goes down, it could be an electronics failure. How's
that different from any other electronic component failure or is anyone
arguing that the ATA drives somehow get less reliable versions of these
components than other parts of a computer?


posted a pretty good dissection
of the build differences. You should read that as well as the other
article cited.
  #45  
Old April 13th 05, 11:14 AM
Matthias Buelow

flux writes:

The issue is not the platters rotating but head movement (constant
seeking). Put an ordinary ATA disk in a busy news server and watch it
explode.


That's probably just a fraction of the work a Tivo machine requires of it.


Care to explain how you reach this conclusion?

mkb.
  #46  
Old April 13th 05, 07:01 PM
_firstname_@lr_dot_los-gatos_dot_ca.us

In article ,
flux wrote:
In article ,
Matthias Buelow wrote:

flux writes:

Actually, it's not. Desktop drives are in use 24/7. Just check out video
recorders like Tivo. These things record video 24/7. AFAIK, they are
just ordinary ATA drives.


The issue is not the platters rotating but head movement (constant
seeking). Put an ordinary ATA disk in a busy news server and watch it
explode.


That's probably just a fraction of the work a Tivo machine requires of it.


Can someone who has more patience than I do please explain the
difference between sequential and random access patterns? And give a
little lecture on the performance characteristics of disks when
exposed to those access patterns? I would guess that operating a Tivo
(at maybe a few MB/s, nearly completely sequential, large IOs) barely
stresses a slow ATA disk, whereas news servers tend to be disk-limited
with random IO and short IOs (you add disks until the disks are barely
capable of keeping up, so the disks are always close to being
overloaded).

I would like to add the following: if you watch a news server, you'll
find that it is quite busy 24x7. As a matter of fact, it isn't clear
that in these days of global news distribution there is a quiet time at
night. This is also true of many corporate servers: during the day,
they are running transaction processing and web-driven workloads;
during the night, they are running data mining, analytics, and backup.
The access patterns are different, but they tend to be busy all the
time. Why? If they were not busy at some time of day, they would be
underutilized, and the workloads that can be rescheduled would be moved
to the underutilized time.

In contrast: in most households, nobody will be watching the Tivo
between midnight and 6 AM, and little during the day (while everyone is
at work). Also, there is little on TV worth recording in the dark of
night and the middle of the day, so a Tivo is likely idle about 50% of
the time.

Note that I said "idle", not powered down. This is about the actuator,
not the spindle.

--
The address in the header is invalid for obvious reasons. Please
reconstruct the address from the information below (look for _).
Ralph Becker-Szendy
  #47  
Old April 13th 05, 07:13 PM
_firstname_@lr_dot_los-gatos_dot_ca.us

In article ,
Curious George wrote:
I'm not talking about sharing the bus, just raw disk performance with
more complex/demanding usage than typical desktop usage patterns. If
you're thinking of Raptors specifically, state that. Raptors are not
typical SATA drives. They are closer to an apples-to-apples comparison,
but then the price is also similar to SCSI, while at the same time they
are newer, less mature, and have a less proven track record.


That's exactly the point of the Anderson/Dykes/Riedel paper: there are
different kinds of drives (namely enterprise and personal/desktop, with
some halfway crossover models). Then there are different interfaces
(for example ATA, SATA, SCSI, FC). There was a very strong
correlation between those two characteristics a few years ago: cheap,
slow, large-capacity, unreliable drives tended to be ATA, while
expensive, fast, smaller-capacity, reliable drives tended to be FC.
The arrival of purported enterprise-grade ATA disks and of SATA disks
has muddled this strong correlation. But the correlation in the
marketplace does not HAVE to be true. Other than the lack of demand,
there is very little that prevents disk manufacturers from making
ultra-high-end enterprise drives with ATA or SATA interfaces, and
cheap consumer drives with FC interfaces. Clearly, there are logical
reasons for the lack of demand.

I'm sorry but I don't see the sense of raptors and a good 3ware card
or whatever. I don't care whether performance & reliability is
competitive or not. 10K SCSI makes more sense to me. It's more
mature, more flexible, better supported, has a longer track record,
etc. and costs about the same. 1st gen 10k sata compared to 6th or
7th gen 10k scsi. Come on.


Even though I very much agree with you, we have to admit that there
are rare exceptions where storage farms built out of inexpensive RAID
cards with cheap consumer-grade disks make a lot of sense. These tend
to be environments that are very large (so they can amortize the extra
management overhead of having to regularly replace failed disks),
require a heck of a lot of storage at low IO intensities, and can
tolerate and manage data loss. I know several examples of disk farms
that use thousands or tens of thousands of ATA disks this way, typically
with exactly the 3ware cards you mentioned. But the bulk of the
industry will keep using high-reliability disk arrays, constructed and
supported by high-end vendors, because the significantly higher
purchase and support cost is worth the savings in hassle, data loss,
and systems management.

In many businesses even marginal increases in reliability are a big
deal because of the massive costs of support, maintenance, and of
service interruption. One's attitude depends on individual tolerance
of risk and fiddling around. ATA doesn't have to be totally unstable
garbage to be, or seem, unsuitable for many people and environments.


Again, as much as I agree in general, there are examples where support
and systems management are not relevant as cost factors. Hobbyists
are one example; there are others. In those environments, using
hard-to-manage components that need to be babied to prevent data loss
and have to be assembled from store-bought parts is very sensible. If
I had a little more spare time, I might set up an ATA RAID system at
home with used WD drives and a 3ware card (if I can get the hardware
cheap at a surplus store; I'm notoriously stingy). In the meantime, I
feel more comfortable with my 10K RPM SCSI drive for the data I really
care about. But I wouldn't even dream of setting up such a system in
my job, or for a serious customer that is paying for storage.

--
Ralph Becker-Szendy
  #48  
Old April 13th 05, 07:24 PM
Ralph Becker-Szendy

In article ,
Curious George wrote:
Not true, unless you are accustomed to buying cheap crap mobos and
horrible cases. The disks should normally be the first to go, or at
least hiccup, followed by the fans and maybe the PSU. It's the moving
parts that fail first. ICs, etc. last a long time in a proper
environment, even under full load. Embedded systems are used in harsh
environments for a reason.


Furthermore, on real computers, power supplies and fans are all
redundant and can be hot-swapped. Look at the back of a good
rackmount computer sometime (in particular an enterprise-style Unix
machine, like an HP-UX or AIX box).

In contrast, disks are by their nature difficult to hot-swap, because
when you put the new disk in, it doesn't have any useful bits on it.
This is where RAID comes in. But even that's pretty difficult. PATA
was not designed for hot-swap; the fact that some disk enclosures and
the 3ware cards can do it at all is a bit of a miracle. At least SATA
is designed for hot swap. All SCSI and FC drives sold today are
hot-swappable (that's why many SCSI drives are sold with the 80-pin
connector that integrates data and power in one connector).

But even after you hot-swap, the RAID controller has to do a lot of
work to put the useful bits back on the drive. During this
reconstruction period, the other drive(s) in the RAID group are
heavily loaded, often to the detriment of the foreground workload.
Also, while the drive is removed, and while the new drive is being
rebuilt onto, you are running with no redundancy (or less redundancy,
if you were running a RAID setup that tolerates multiple failures, but
those are still exceedingly rare outside of high-end enterprise disk
arrays).
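The rebuild exposure described above is easy to put rough numbers on. The figures below (drive capacity, rebuild rate, array width) are illustrative round numbers, not measurements of any particular product:

```python
# Back-of-envelope RAID-5 rebuild exposure; all figures are illustrative.
GB = 10**9

capacity_gb  = 400   # size of the replaced drive
rebuild_mb_s = 40    # assumed sustained rebuild rate onto the new drive
survivors    = 3     # surviving drives in a 4-disk RAID-5

rebuild_secs  = capacity_gb * GB / (rebuild_mb_s * 10**6)
rebuild_hours = rebuild_secs / 3600

# Each block written to the new drive is reconstructed by reading the
# corresponding block from every surviving drive, so each survivor is
# read end to end once.
extra_reads_gb = survivors * capacity_gb

print(f"rebuild time ~ {rebuild_hours:.1f} h with reduced redundancy")
print(f"extra reads  ~ {extra_reads_gb} GB across the surviving drives")
```

That multi-hour window with the surviving drives under full load is exactly when a second, now fatal, failure is most likely to surface.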

In summary, the disks are and remain the least reliable component of a
serious computer system.

Now, obviously by buying crap components, you can make arbitrarily bad
systems. If I only used motherboards from the grab bin at the surplus
store, and power supplies that are cheap because they failed the
burn-in test at the manufacturer, my disk drives might actually look
good in contrast to this crap. Nobody who cares about their computers
would build a system that way (masochists excepted).

  #49  
Old April 13th 05, 07:51 PM
Jim Prescott

In article 1113415288.189491@smirk,
wrote:
In contrast: in most households, nobody will be watching the Tivo
between midnight and 6 AM, and little during the day (while everyone is
at work). Also, there is little on TV worth recording in the dark of
night and the middle of the day, so a Tivo is likely idle
about 50% of the time.
Note that I said "idle", not powered down. This is about the actuator,
not the spindle.


Tivo records 24x7. It works harder when you are watching, since it is
then also reading, but it is always recording.

I'm not stating an opinion on the larger thread here. I just think
Tivo might not be that meaningful in a discussion about disk
reliability. Tivo's disk needs to be quiet, not generate much heat, and
have "enough" performance (i.e., having more than enough performance
doesn't really help it at all). Computers will typically have a very
different set of requirements.
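The gap between the two workloads can be put in rough numbers. Both figures below are illustrative assumptions (a standard-definition recording stream, and a commodity drive doing small random reads), not measurements:

```python
# Rough throughput comparison; all numbers are illustrative assumptions.

# A standard-definition Tivo stream is on the order of a few Mbit/s.
tivo_mbit_s = 4
tivo_mb_s = tivo_mbit_s / 8           # ~0.5 MB/s, written sequentially

# A 7200 rpm ATA drive doing small random reads manages maybe ~80 IOPS.
iops = 80
article_kb = 4                        # small news articles
news_mb_s = iops * article_kb / 1024  # ~0.3 MB/s, yet the drive is saturated

print(f"Tivo stream:  ~{tivo_mb_s:.2f} MB/s sequential (actuator mostly idle)")
print(f"News server:  ~{news_mb_s:.2f} MB/s random (actuator seeking nonstop)")
```

The point is that the raw MB/s is similar, but the news server's drive spends nearly all its time seeking, which is exactly the actuator wear discussed upthread.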
--
Jim Prescott - Computing and Networking Group
School of Engineering and Applied Sciences, University of Rochester, NY
 



