What am I doing wrong ??? Or is Adaptec 21610SA just a crappy RAID card ?



 
 
#101
December 5th 04, 11:12 PM
Jesper Monsted

Arno Wagner wrote:

[snip]
It is quite clear which one you want in a high-priced
server.


ATA for a CDROM and Fibre Channel for the rest, please.


--
/Jesper Monsted
#102
December 6th 04, 12:08 AM
Alexander Grigoriev

Looks like an MTBF of about 100,000 hours (assuming ~180 drives get replaced per year). No good.
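(For reference, the arithmetic behind that estimate as a back-of-the-envelope Python sketch; the 2200 drives and 2-5 replacements a week come from the quoted post below, while the ~3.5/week midpoint and 8766 power-on hours per drive-year are assumptions:)

# Field MTBF implied by the figures quoted below.
DRIVES = 2200
FAILURES_PER_WEEK = 3.5            # assumed midpoint of the quoted 2-5/week
HOURS_PER_YEAR = 8766              # 24 hours x 365.25 days, running 24/7

failures_per_year = FAILURES_PER_WEEK * 52        # ~182 drives a year
drive_hours_per_year = DRIVES * HOURS_PER_YEAR    # ~19.3 million hours

observed_mtbf = drive_hours_per_year / failures_per_year
print(f"Observed MTBF: {observed_mtbf:,.0f} hours")   # ~106,000 hours

# For comparison: a 1,200,000-hour rated MTBF works out to
# 1,200,000 / 8766 = ~137 drive-years per failure, which is where the
# "1 of 136 disks will fail each year" figure in this thread comes from.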

"Jesper Monsted" wrote in message
.163...
"Maxim S. Shatskih" wrote in news:coq0t2$rs7$1
@gavrilo.mtu.ru:

No, this only means that each year 1 of 136 disks will fail


Your calculations just don't match the real world, but what the hell

Out of 2200 or so 146GB FC drives, we replace 2-5 every week. This is
quite a bit more than your one-in-136 a year.

--
/Jesper Monsted



#103
December 6th 04, 12:09 AM
J. Clarke

Jesper Monsted wrote:

"Maxim S. Shatskih" wrote in news:coq0t2$rs7$1
@gavrilo.mtu.ru:

No, this only means that each year 1 of 136 disks will fail


Your calculations just don't match the real world, but what the hell

Out of 2200 or so 146GB FC drives, we replace 2-5 every week. This is
quite a bit more than your one-in-136 a year.


(a) what is the MTBF on those drives
(b) if the failure rate is significantly greater than the rated MTBF
would suggest, have you tried to find out what is killing them?


--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
#104
December 6th 04, 02:37 AM
flux

"J. Clarke" wrote:

flux wrote:

This supports my assertion that gigabit is relatively recent.


How? By that reasoning keyboards are "relatively recent" because Dell's
laptops have them.


I think Dell always has shipped their laptops with keyboards :-) But
when did they start shipping with gigabit? Recently, no?

Well, dude, you design one and see how you make out.


They need to be designed?
#105
December 6th 04, 04:30 AM
Peter

Your calculations just don't match the real world, but what the hell

Out of 2200 or so 146GB FC drives, we replace 2-5 every week. This is
quite a bit more than your one-in-136 a year.


That could be because:
1. Your hard drives work harder
2. Power on Hours per year are higher
3. Ambient temperature is higher
4. Drives are getting older
than specified for MTBF measurement.


#106
December 6th 04, 08:08 AM
J. Clarke

Peter wrote:

Your calculations just don't match the real world, but what the hell

Out of 2200 or so 146GB FC drives, we replace 2-5 every week. This is
quite a bit more than your one-in-136 a year.


That could be because:
1. Your hard drives work harder
2. Power on Hours per year are higher


One would hope that for enterprise-quality drives the design hours per year
would be 8766 (24 hours x 365.25 days, i.e. continuous operation).

3. Ambient temperature is higher
4. Drives are getting older
than specified for MTBF measurement.


--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
#107
December 6th 04, 04:23 PM
Jesper Monsted

"J. Clarke" wrote in
:

Jesper Monsted wrote:

"Maxim S. Shatskih" wrote in
news:coq0t2$rs7$1 @gavrilo.mtu.ru:

No, this only means that each year 1 of 136 disks will fail


Your calculations just don't match the real world, but what the hell


Out of 2200 or so 146GB FC drives, we replace 2-5 every week. This is
quite a bit more than your one-in-136 a year.


(a) what is the MTBF on those drives


Not quite sure, since it's a mix of whatever EMC had on the shelf at the
time. There are quite a few Seagates (rated at 1,400,000 hours or
1,200,000 hours, depending on the model) and Ultrastars (which I can't find
the MTBF for, but assume it's about the same).
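(A quick sanity check on those ratings, as a Python sketch; it assumes the ~2200-drive pool and 24/7 operation described elsewhere in the thread:)

# Failures per year that the rated MTBF would predict for the pool.
DRIVES = 2200
HOURS_PER_YEAR = 8766    # 24/7 operation

for rated_mtbf in (1_400_000, 1_200_000):
    expected = DRIVES * HOURS_PER_YEAR / rated_mtbf
    print(f"{rated_mtbf:,} h MTBF -> ~{expected:.0f} failures/year")

# Roughly 14 and 16 failures a year, versus the 100-260 a year
# (2-5 a week) actually being replaced: an order of magnitude apart.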

(b) if the failure rate is significantly greater than the rated MTBF
would suggest, have you tried to find out what is killing them?


Firmware. They are being replaced as soon as anything looks like it's going
to fail. Soft errors, S.M.A.R.T. readouts etc. are all taken into
consideration and drives preemptively replaced.

They still managed to fail two in the same raidset, b0rking 18 TB of
data warehouse, though.

--
/Jesper Monsted
#108
December 6th 04, 04:30 PM
Jesper Monsted

"Peter" wrote in news:31i5k1F3asuueU1
@individual.net:

Your calculations just don't match the real world, but what the hell

Out of 2200 or so 146GB FC drives, we replace 2-5 every week. This is
quite a bit more than your one-in-136 a year.


That could be because:
1. Your hard drives work harder


Sitting in an enterprise-class storage system, they might, but that
shouldn't fail them that often.

2. Power on Hours per year are higher


24/7, just like they were designed for (I hope... whoever turns off
servers?)

3. Ambient temperature is higher


Properly cooled datacenter at about 22 degrees C. Exhaust air from the box
isn't even warm to the touch.

4. Drives are getting older
than specified for MTBF measurement.


About a year and a half now, but this has been the same since we turned on
the things.

As I noted in a different post, we (or rather, EMC) replace them as soon as
they act funny in any way, which probably means our definition of
"failure" is a bit different than the disk manufacturers'...

I was told EMC scrapped a 100k-unit shipment of Ultrastars at one point -
there's a bad RMA for some support droid at IBM.

--
/Jesper Monsted
#109
December 6th 04, 04:40 PM
Folkert Rienstra

"Peter" wrote in message
Your calculations just don't match the real world, but what the hell

Out of 2200 or so 146GB FC drives, we replace 2-5 every week. This is
quite a bit more than your one-in-136 a year.


That could be because:
1. Your hard drives work harder


They are FC drives.

2. Power on Hours per year are higher


They are FC drives.

3. Ambient temperature is higher


They are FC drives.

4. Drives are getting older


Or at 146GB are still very young.

than specified for MTBF measurement.


146GB drives, older than 5 years?
#110
December 6th 04, 06:17 PM
Thor Lancelot Simon

Peter wrote:
Your calculations just don't match the real world, but what the hell

Out of 2200 or so 146GB FC drives, we replace 2-5 every week. This is
quite a bit more than your one-in-136 a year.


That could be because:
1. Your hard drives work harder
2. Power on Hours per year are higher
3. Ambient temperature is higher
4. Drives are getting older
than specified for MTBF measurement.


Or it could be because unless you pay for an application-specific MTBF
guarantee (which you will probably be contractually bound not to
disclose), what you're working with is a measurement so fundamentally
tied to marketing purposes that it's basically not useful for any
technical purpose at all.

You could reasonably think of those 1,200,000 hour MTBF numbers as
being generated like this:

"We want to quote a million-hour MTBF. How many drives to we have to
run for a month to quote that? We only have a month left in the
development cycle before we have to have the marketing materials ready
for product announcement."

"Hm. I see it's a little under 2000 drives. Well, let's pull 2,000 of the
first production run (that pass the initial QA test, which will exclude DOA
units and units with the kind of obvious mechanical problems that will
kill them early; this sort of QA is often _reduced_ for later production
runs) and put them in a room for 1000 hours. If only one or two fail,
we'll claim a 1,000,000 hour MTBF and nobody will be able to sue us."

Of course very few units actually fail in the first month of use, in
the "enterprise drive" space where even moderately rigorous QA is done
on the production line. The whole key to the fraud is in manipulating
the total length of the MTBF test so that it *never hits* the actual
point in time when wear on any component might cause that component to
have a significant likelihood of causing a unit failure.

The moral of the story is that if you run your disk drives for a month
and then throw them away, you can feel reasonably confident in trusting
manufacturer MTBF numbers. Otherwise, though... caveat emptor.
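(The arithmetic in that hypothetical does check out; a Python sketch, taking the quoted 2,000 drives, 1,000 test hours and two allowed failures at face value:)

# Sizing an MTBF demonstration test per the scenario quoted above.
TARGET_MTBF = 1_000_000     # hours the marketing material wants to claim
TEST_HOURS = 1_000          # ~6 weeks of powered-on test time

# Drive-hours needed so that ~2 failures still support the claim:
drives_needed = 2 * TARGET_MTBF / TEST_HOURS
print(f"Drives needed: {drives_needed:.0f}")              # 2000

# The MTBF such a test actually "demonstrates":
drives, failures = 2000, 2
demonstrated = drives * TEST_HOURS / failures
print(f"Demonstrated MTBF: {demonstrated:,.0f} hours")    # 1,000,000

# The catch: 1,000 hours is about six weeks, so the test window never
# reaches the point where wear-out failures start to appear.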

--
Thor Lancelot Simon
Am I politic? Am I subtle? Am I a Machiavel?
-William Shakespeare
 



