What am I doing wrong ??? Or is Adaptec 21610SA just a crappy RAID card ?



 
 
  #61  
Old December 3rd 04, 02:34 PM
Nik Simpson

flux wrote:


In an ordinary office environment, how would backups get accomplished if
the computers are running 24/7?


So your experience of normal office environments is clearly limited if
you don't understand that systems stay on even during backup, shock
horror, pictures at 10.

--
Nik Simpson
  #62  
Old December 3rd 04, 03:33 PM
Maxim S. Shatskih

What do those numbers actually mean? 1,200,000 hours is 136 years.

So this number taken at face value is pretty silly because it's
essentially saying it won't be until sometime in the 22nd century before
even the first SCSI hard disk anywhere on Earth fails!


No, this only means that, statistically, about 1 in every 136 disks will fail each year.
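
A quick back-of-the-envelope sketch of that arithmetic, assuming 8,760
power-on hours per year and the usual constant-failure-rate reading of
MTBF (Python, illustrative only):

# A 1,200,000-hour MTBF does not mean a single drive lasts 136 years; it
# is a population statistic: run a large number of drives and expect
# roughly this fraction of them to fail each year.
HOURS_PER_YEAR = 24 * 365            # 8,760 power-on hours if run 24/7

def annualized_failure_rate(mtbf_hours):
    """Fraction of a large drive population expected to fail per year."""
    return HOURS_PER_YEAR / mtbf_hours

afr = annualized_failure_rate(1200000)
print("AFR: %.2f%% per year" % (afr * 100))                    # ~0.73%
print("i.e. about 1 in %d drives per year" % round(1 / afr))   # ~1 in 137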

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation

http://www.storagecraft.com


  #63  
Old December 3rd 04, 07:51 PM
J. Clarke

flux wrote:

Malcolm Weir wrote:

Now, has it dawned on you that even the most rudimentary of network
servers has multiple NICs? Why do you think that is? Are server
manufacturers silly?


That's a very recent development. Even gigabit is relatively recent.

I strongly suspect that all your experience has been with the trivial
case, where you have (at most) a few file-sharing clients on a
network. In these cases, you are right. But there's no money in that
market, since any fool can build such a system.


What other market is there?


Are you really this ignorant?

Where *hard* problems are, at least for those of us in
comp.arch.storage, it is assumed that the network problem is already
solved. Need 10Gb/sec of network bandwidth and don't have a 10G
Ethernet? Simply trunk 10 1000BaseT nets to your switch! Cisco (and
the like) can handle that part of the problem.


Again, this sounds very rare. Where are there disks fast enough to
saturate this much Ethernet?


Are you familiar with the concept of "RAID"?
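
A rough sizing sketch shows why (the per-drive throughput here is an
assumption -- around 60 MB/s sustained is plausible for a drive of this
era -- not a quoted spec):

import math

LINK_GBIT = 10              # e.g. 10 trunked 1000BaseT links, or 10G Ethernet
DRIVE_MB_PER_S = 60         # assumed sustained throughput per spindle

link_mb_per_s = LINK_GBIT * 1000 / 8            # ~1250 MB/s
drives = math.ceil(link_mb_per_s / DRIVE_MB_PER_S)
print("%d striped drives to saturate %d Gb/s (~%.0f MB/s)"
      % (drives, LINK_GBIT, link_mb_per_s))
# -> about 21 drives, i.e. a single modest RAID shelf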

--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
  #64  
Old December 3rd 04, 08:28 PM
Anton Rang

flux writes:
Anton Rang wrote:

SATA disks typically have less error checking internally than SCSI,


How do you know this?


*points out the window to the Seagate office down the road*

I work in storage; I talk with drive engineers (and RAID engineers).

-- Anton
  #65  
Old December 3rd 04, 08:43 PM
Anton Rang

"J. Clarke" writes:
SCSI supports disconnects (parallel work of several drives on the same
cable)


SATA supports one drive per cable, so how would this be useful with SATA?


Disconnect is a requirement for tagged queueing (otherwise you don't
get a chance to issue the other command, and the drive doesn't get to
transfer data for commands out-of-order). (It's also useful in the
SCSI shared bus environment, of course.)

I see. So what specific properties make SCSI command queuing superior to
both the command queuing methods available with SATA?


The original queueing method took an extra interrupt per I/O and added
lots of overhead for each command (according to Intel, anyway; I never
looked at that spec). The "Native Command Queueing II" is supposed to
be better.

A few differences I see immediately in looking at the spec --

SCSI command queueing supports 256 outstanding commands per target.
ATA command queueing supports 32 outstanding commands per target.

SCSI disconnect allows data to be transferred out-of-order (for instance,
start sending data at the sector under the drive head, then go back to
fill in the preceding sectors as the disk rotates back to them). This
can reduce latency, particularly for small multi-sector transfers.
ATA disconnect requires data to be transferred in-order.

SCSI command queueing supports an ordering model which allows the host
to specify high-priority commands, or commands whose order must be
maintained (important for databases). ATA command queueing does not
(hence ordered writes cannot use queueing).

SCSI command queueing allows commands to be aborted. It's not obvious
to me whether ATA command queueing allows this or not.
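
To see why queue depth matters at all, here is a toy model (purely
illustrative: it ignores rotational position and everything else a real
drive optimises) comparing strict arrival-order service with
nearest-first reordering at different queue depths:

import random

random.seed(1)
requests = [random.randrange(1000000) for _ in range(2000)]   # random LBAs

def total_seek_distance(reqs, queue_depth):
    pending, head, moved = list(reqs), 0, 0
    while pending:
        window = pending[:queue_depth]      # commands visible to the drive
        nxt = min(window, key=lambda lba: abs(lba - head))
        moved += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return moved

for depth in (1, 32, 256):    # FIFO, ATA-style depth, SCSI-style depth
    print("depth %3d: total seek distance %d"
          % (depth, total_seek_distance(requests, depth)))
# The distance drops sharply between depth 1 and 32; 256 helps further,
# though with diminishing returns.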

-- Anton
  #66  
Old December 3rd 04, 11:34 PM
Malcolm Weir

On Fri, 03 Dec 2004 07:22:31 GMT, flux wrote:

Malcolm Weir wrote:

Ask any marketing professional about "take up" rates. For any offer,
service, or program that a manufacturer provides, some proportion of
customers won't take advantage of it even when they could. Sometimes
this is because they lose necessary documentation, other times because
they forget, and still more because they don't care about replacing
the failed unit with another equivalent unit (e.g. if you're going
through the hassle of replacing the thing, why not upgrade at the same
time?)


Or it could simply be the case that the drives are more reliable than
you believe.


Well, my beliefs are based on experience and direct conversations with
disk drive manufacturers.

What are yours based on?

A logical rebuttal might be that manufacturers could offer lifetime
warranties on SCSI drives because they are just that durable, but a
warranty that long doesn't make sense from a marketing point of view
because the manufacturers do want their customers to upgrade eventually.


You call *that* "logical"?


yes.


Figures.

It isn't.

Drives have a service life which is related to the MTBF, but is
different from it.

Here's a scenario that is, hopefully, simple enough even for you:

Taking your 1.2M-hour MTBF, that might mean:

Year 1: 1 out of every 150 drives fails. 95% of failed drives get
returned for replacement. Cost of replacement = 100% cost of new
drive.
Year 2: 1 out of every 146 drives fails. 90% of failed drives get
returned for replacement. Cost of replacement = 90% cost of new
drive.
Year 3: 1 out of every 142 drives fails. 85% get returned. Cost of
replacement = 80% cost of new drive.
Year 4: 1 out of every 135 drives fails. 75% get returned. Cost of
replacement = 60% of cost of new drive.
Year 5: 1 out of every 100 drives fails. 50% get returned. Cost of
replacement = 40% of cost of new drive.
Year 6: No one cares. 0% get returned. Cost of replacement n/a.

The numbers are, of course, entirely fictional, but they *are*
representative of what happens.
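
As a sketch of what a curve like that costs the manufacturer per drive
sold (using only the fictional figures above):

# (annual failure rate, fraction of failures returned, replacement cost
#  as a fraction of a new drive's price) -- the made-up numbers above
years = [
    (1.0 / 150, 0.95, 1.00),   # year 1
    (1.0 / 146, 0.90, 0.90),   # year 2
    (1.0 / 142, 0.85, 0.80),   # year 3
    (1.0 / 135, 0.75, 0.60),   # year 4
    (1.0 / 100, 0.50, 0.40),   # year 5
    (0.0,       0.00, 0.00),   # year 6: no one cares
]

expected = sum(fail * returned * cost for fail, returned, cost in years)
print("Expected warranty cost: %.1f%% of one drive's price" % (expected * 100))
# -> a little over 2% of the drive's price across the whole warranty period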

You are probably wondering why the "cost of replacement" (to the
manufacturer) falls over time. There are two main reasons. The first is
the amortization of development costs over time relative to production
costs. If a manufacturer decides that a given drive has an effective
saleable lifespan of, say, 2 years, then *all* the development costs
have to be recovered in that time, since they won't be selling many
after that period. (They'll likely be selling a similar model, but it
won't be the same disk. Take the disk in a 20GB iPod, which is either
a Toshiba MK2003GAL or MK2004GAL. Same functional specs, but the
latter is later, obviously.)

The second reason why the replacement cost falls is that if you
replace a disk having a 5 year warranty after 2 years, the replacement
only carries a 3 year warranty.

Do you really believe that the same proportion of people take
manufacturers up on the warranty after (say) 3 years as do after 1
month?


No, they probably upgrade.


Or... they can't find the paperwork, or don't remember that they have a warranty...

But wait, didn't someone just say the cost of
upgrading is peanuts compared to the cost of downtime?


Yes, it is. Welcome to the point. I hope you'll be very happy
together.

The cost of downtime dwarfs the cost of the upgrade, just as the cost
of installing cabling dwarfs the cost of the cable. So if you're
going to mess around with doing either, you may as well install the
more expensive option while you're at it!

One positive note from this extremely silly thread: I went and
discovered that a little dead notebook drive that I bought two years
ago still has a warranty. So it's off to Hitachi with it!

(It was replaced several months ago. I just hadn't got around to
tossing it... luckily!)

Malc.
  #67  
Old December 3rd 04, 11:35 PM
Malcolm Weir

On Fri, 03 Dec 2004 07:19:38 GMT, flux wrote:

"Nik Simpson" wrote:

But they are basing their warranty calculations on how the drive is used,
and (with the exception of WD's 10K drives) they expect them to go into PC
devices which don't run 24x7, so the MTBF is expected to be stretched
because the drive is spending a good deal of its time doing very little or
powered down.


The TiVo I have attached to my TV streams video to disk 24/7. That's a
consumer appliance!


Yes. What's your point?

Do you think that *every* Tivo does that?

In an ordinary office environment, how would backups get accomplished if
the computers are running 24/7?


A good question. One that professionals have been dealing with for
decades.

We've solved it.

Malc.
  #68  
Old December 3rd 04, 11:40 PM
Malcolm Weir

On Fri, 03 Dec 2004 07:14:51 GMT, flux wrote:

Malcolm Weir wrote:

Now, has it dawned on you that even the most rudimentary of network
servers has multiple NICs? Why do you think that is? Are server
manufacturers silly?


That's a very recent development. Even gigabit is relatively recent.


1999.

Yes, compared to the development of (say) the microprocessor,
"relatively recent". But compared to the service life of (say) a disk
drive, it was (literally) a lifetime ago.

I strongly suspect that all your experience has been with the trivial
case, where you have (at most) a few file-sharing clients on a
network. In these cases, you are right. But there's no money in that
market, since any fool can build such a system.


What other market is there?


Commercial data processing, government, and scientific computing probably cover
most of the dollars...

Where *hard* problems are, at least for those of us in
comp.arch.storage, it is assumed that the network problem is already
solved. Need 10Gb/sec of network bandwidth and don't have a 10G
Ethernet? Simply trunk 10 1000BaseT nets to your switch! Cisco (and
the like) can handle that part of the problem.


Again, this sounds very rare.


Yet it isn't. Gosh. Could it be that you are ignorant of what you
write?

What do you do for a living?

Where are there disks fast enough to
saturate this much Ethernet?


EMC, HDS, HP, and LSI Logic will happily provide them for you!

Malc.
  #69  
Old December 3rd 04, 11:42 PM
Malcolm Weir

On Fri, 03 Dec 2004 07:10:02 GMT, flux wrote:

Malcolm Weir wrote:

On Thu, 02 Dec 2004 07:13:57 GMT, flux wrote:

"J. Clarke" wrote:

For enterprise storage replacing drives every two years would be very
costly. The price of the drives is peanuts compared to the cost of
downtime.

This seems to imply nobody ever buys new equipment.


No, it doesn't. It implies that enterprises would rather replace
drives every three years, not every two, and would rather replace them
every four years than every three, etc.


How is three years significantly less costly than two?


Did you flunk elementary math?

Here's the answer:

In a 6-year period, how many times will you have to replace the disks if
you do it:

(a) Every two years?
(b) Every three years?

I think you're a troll.

And ignorant!

Malc.
  #70  
Old December 3rd 04, 11:44 PM
Malcolm Weir

On Fri, 03 Dec 2004 07:07:46 GMT, flux wrote:

"Peter" wrote:

You are completely wrong (did you ever study statistics?).

Reread what I wrote carefully, and you will see that it is quite correct.


Yes, I did. You said:
"So this number taken at face value is pretty silly because it's
essentially saying it won't be until sometime in the 22nd century before
even the first SCSI hard disk anywhere on Earth fails!"

No, your understanding is NOT correct; the MTBF number does not imply that!


No, you are still misunderstanding. I was *intentionally* reading it as
a literal value.


You could *intentionally* read it as a phone number.

You'd be stupid to do so.

You could *intentionally* read it as the supply voltage, in volts.

You'd be *very* stupid to do so.

Or you could read it as an MTBF, which is what it is, and says it is,
and which is the only "face value" worth considering.

But you appear too stupid to do so.

Malc.
 



