A computer components & hardware forum. HardwareBanter

Slow SCSI RAID5 Array...



 
 
  #1  
Old June 25th 07, 11:13 PM posted to comp.sys.ibm.pc.hardware.storage
NewMan
external usenet poster
 
Posts: 20
Default Slow SCSI RAID5 Array...

Hi,

I have a Dell 1600SC running Win2000 Server. This box has a PERC4/SC
RAID Card, and 3 x 73 GB SCSI 10,000 RPM drives configured for
Hardware RAID5.

When this server was originally deployed some 3 years ago, it was a
screamer. No complaints about disk access at all.

A few months ago, something on the O/S partition got corrupted, so I
took it off-line. I ran just about every diagnostic under the sun,
including the Dell diagnostics. Everything checks out OK. So, I nuked
the "C:" drive and reinstalled W2K Server. Then I migrated the data
back onto the allocated data share.

This is when the problems started. For some strange reason, access to
the disk is now just painfully slow. I have googled myself to death,
and tried various things which have made marginal improvements. Still
TOO SLOW.

I happen to have a Thecus N5200 NAS box configured with 5 x 320GB
7200 RPM SATA II Seagates in a RAID6 array.

I copied some of the data onto a test partition on the NAS box, and
tried to access it. Seemed nice and snappy!

So I downloaded a copy of IOMeter to see what the write speeds were
like all round.

My local computer maxes out at 13 ms MAX for a "write". The NAS box
maxes out at about 18 ms. The W2K server maxes out at 1500 ms?????
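(For anyone wanting a quick sanity check without installing IOMeter: the numbers above came from IOMeter itself, but a rough probe of worst-case synchronous write latency can be sketched in a few lines of Python. The block size and round count here are arbitrary choices, not IOMeter's defaults.)

```python
import os
import tempfile
import time

def worst_write_latency_ms(path, block_size=64 * 1024, rounds=20):
    """Time `rounds` synchronous writes and return the worst one in ms."""
    buf = os.urandom(block_size)
    worst = 0.0
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        for _ in range(rounds):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # push the write past the OS cache to the device
            worst = max(worst, (time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
    return worst

target = os.path.join(tempfile.gettempdir(), "latency_probe.bin")
print(f"worst write: {worst_write_latency_ms(target):.1f} ms")
os.remove(target)
```

A healthy array should stay in the low tens of milliseconds here; anything in the hundreds points at the controller or driver rather than the disks.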

WTF???

I was blaming some of this on the various client computers involved,
but these tests were done on my own machine which is running
perfectly.

So what gives???

Why on earth is the rebuilt server running so damn slow???

Any and all suggestions would be greatly appreciated!
  #2  
Old June 26th 07, 04:22 AM posted to comp.sys.ibm.pc.hardware.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Slow SCSI RAID5 Array...

Previously NewMan wrote:

[snip]

I suspect there is some command failure, followed by a
timeout and then a retry with a different command.
Have you installed current drivers for the RAID card,
and are the disks configured properly in the RAID
BIOS setup?
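That hypothesis fits the numbers, too. A toy model (the timeout and command times below are illustrative guesses, not measurements from this box):

```python
def observed_latency_ms(normal_cmd_ms, timeout_ms, first_attempt_fails):
    """If the first command fails, the host eats the timeout before retrying."""
    if first_attempt_fails:
        return timeout_ms + normal_cmd_ms  # wait out the timeout, then retry
    return normal_cmd_ms

# A ~1.5 s driver timeout ahead of every retried write would neatly
# explain ~1500 ms worst-case writes on an otherwise healthy array.
print(observed_latency_ms(15, 1500, first_attempt_fails=True))   # 1515
print(observed_latency_ms(15, 1500, first_attempt_fails=False))  # 15
```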

Arno
  #3  
Old June 26th 07, 12:48 PM posted to comp.sys.ibm.pc.hardware.storage
Kwyjibo
external usenet poster
 
Posts: 14
Default Slow SCSI RAID5 Array...


"NewMan" wrote in message
...
[snip]

A few months ago, something on the O/S partition got corrupted, so I
took it off-line. I ran just about every disagnostic under the sun,
including the DELL diagnotics. Everything checks out OK. So, I nuked
the "C:" drive and reinstalled W2K server.


When you say you nuked the C: drive, what did you do exactly? Delete/format,
or fdisk and recreate your partition table?
If you totally wiped the partition table, you have likely created new
partitions that are not aligned correctly with regard to your RAID stripe
size (which will result in substandard performance).

Have a look at the section entitled "Diskpar sample program" at
http://download.microsoft.com/downlo...ubsys_perf.doc
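The alignment check is simple arithmetic: the partition's starting byte offset must be a multiple of the stripe size. A sketch (the 64 KiB stripe and sector numbers are illustrative; check your actual stripe size in the controller BIOS):

```python
def is_stripe_aligned(start_sector, stripe_kib, sector_bytes=512):
    """True if the partition's first byte falls on a RAID stripe boundary."""
    return (start_sector * sector_bytes) % (stripe_kib * 1024) == 0

# The classic MS-DOS/CHS layout starts partition 1 at sector 63
# (31.5 KiB in), which straddles a 64 KiB stripe -- every stripe-sized
# I/O then touches two stripes instead of one.
assert not is_stripe_aligned(63, 64)

# A diskpar-style fix: start the partition on a 64 KiB boundary,
# e.g. sector 128.
assert is_stripe_aligned(128, 64)
```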

--
Kwyj.



  #4  
Old June 26th 07, 03:38 PM posted to comp.sys.ibm.pc.hardware.storage
NewMan
external usenet poster
 
Posts: 20
Default Slow SCSI RAID5 Array...

On 26 Jun 2007 03:22:39 GMT, Arno Wagner wrote:

[snip]

I suspect there is some command failure, followed by a
timeout and then a retry with a different command.
Have you installed current drivers for the RAID card,
and are the disks configured properly in the RAID
BIOS setup?

Arno


I did find an updated driver for the card, as well as an updated
firmware for it. Installation of both did not affect performance at
all.

The RAID card BIOS shows the array state as "optimal", and the array
passes consistency checks.

I also used SpinRite 6 to test the array, and all shows as OK.

And the built-in Dell boot partition diagnostics show all good as well.
  #5  
Old June 26th 07, 03:41 PM posted to comp.sys.ibm.pc.hardware.storage
NewMan
external usenet poster
 
Posts: 20
Default Slow SCSI RAID5 Array...

On Tue, 26 Jun 2007 21:48:27 +1000, "Kwyjibo"
wrote:


[snip]


When you say you nuked the C: drive, what did you do exactly? Delete/format,
or fdisk and recreate your partition table?
If you totally wiped the partition table, you have likely created new
partitions that are not aligned correctly with regard to your RAID stripe
size (which will result in substandard performance).

Have a look at the section entitled "Diskpar sample program" at
http://download.microsoft.com/downlo...ubsys_perf.doc


I used Volume Manager to delete the C: partition where the O/S was.
Then I recreated it and set it as active. I then did a clean
re-install of the O/S to the recreated C: partition. All other
partitions were left as is.

But I will read that document. I can use all the help I can get.
  #6  
Old June 26th 07, 08:46 PM posted to comp.sys.ibm.pc.hardware.storage
NewMan
external usenet poster
 
Posts: 20
Default Slow SCSI RAID5 Array...

On Tue, 26 Jun 2007 21:48:27 +1000, "Kwyjibo"
wrote:


[snip]

Have a look at the section entitled "Diskpar sample program" at
http://download.microsoft.com/downlo...ubsys_perf.doc


GOT IT!

Thank you SOOO much for that document!

As I said somewhere else, I had found a firmware update for my
PERC4/SC card. As near as I can tell, when the firmware was flashed
onto the card, while it DID maintain the array information to permit
reboot, the flash appears to have reset the various options of the
card - most notably the write-back cache!

I looked at the Dell OpenManage application, and it told me write
caching was disabled.

I had to reboot and press Ctrl-M to get into the config screen for
the PERC4, then hunt around a bit.

Once the write-back cache was turned on and the server rebooted, it is
back to its snappy self!

Speed tests: my local SATA runs at a max write of 13 ms, the NAS box
at 18 ms, and the server now measures about 15 ms!

I know the PERC4 has no battery option, but I have a beefy APC UPS
that can hold the server up for almost 45 minutes, and I have the
PowerChute software to allow an orderly shutdown - so I don't think
there is too much risk!
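The size of the swing (1500 ms down to ~15 ms) fits how RAID5 handles small writes. With write-through, every small write pays a read-modify-write penalty against the disks; with write-back, the controller acknowledges from its cache. A toy model (all timings assumed for illustration, none measured on the PERC4):

```python
def raid5_small_write_ms(disk_op_ms, write_back, cache_ack_ms=0.1):
    """Host-visible latency for one small RAID5 write (simplified model)."""
    if write_back:
        # Controller acknowledges once the data lands in its cache;
        # the disks are updated later, in the background.
        return cache_ack_ms
    # Write-through read-modify-write: read old data, read old parity,
    # write new data, write new parity -- four disk operations.
    return 4 * disk_op_ms

print(raid5_small_write_ms(8.0, write_back=False))  # 32.0
print(raid5_small_write_ms(8.0, write_back=True))   # 0.1
```

This simple model doesn't capture queueing or timeouts, which is presumably where the extra order of magnitude came from, but it shows why disabling write-back hurts RAID5 far more than a plain disk.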

This problem was really getting on everyone's nerves. Thanks so much
for your help!


  #7  
Old June 27th 07, 02:36 AM posted to comp.sys.ibm.pc.hardware.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Slow SCSI RAID5 Array...

Previously NewMan wrote:

[snip]

I did find an updated driver for the card, as well as an updated
firmware for it. Installation of both did not affect performance at
all.


The RAID card BIOS shows the array state as "optimal", and the array
passes consistency checks.


Hmm.

I also used Spinrite6 to test the array, and all shows as OK.


That does not mean anything. SpinRite is an artefact of the
past and does nothing for modern disks.

And the built-in Dell boot partition diagnostics show all good as well.


Maybe this is some conflict between the (somewhat historic)
W2K and the controller or driver.

Can you test the card in a different computer, and/or with a
different OS?

Arno
  #8  
Old June 27th 07, 02:38 AM posted to comp.sys.ibm.pc.hardware.storage
Arno Wagner
external usenet poster
 
Posts: 2,796
Default Slow SCSI RAID5 Array...

Previously NewMan wrote:

[snip]

GOT IT!


Thank you SOOO much for that document!


As I said somewhere else, I had found a firmware update for my
PERC4/SC card. As near as I can tell, when the firmware was flashed
onto the card, while it DID maintain the array information to permit
reboot, the flash appears to have reset the various options of the
card - most notably the write-back cache!


Argh! That would do it!


[snip]




Congratulations on figuring it out!

Arno
  #9  
Old June 27th 07, 07:22 PM posted to comp.sys.ibm.pc.hardware.storage
Folkert Rienstra
external usenet poster
 
Posts: 1,297
Default Slow SCSI RAID5 Array...

"Arno Wagner" wrote in message
Previously NewMan wrote:

[snip]


The RAID card BIOS shows the array state as "optimal", and the array
passes consistency checks.


Hmm.


You should really do something about that humming, babblebot.


I also used Spinrite6 to test the array, and all shows as OK.


That does not mean anything.


SpinRite is an artefact of the past and does nothing for modern disks.


Thanks for confirming that everything is OK then, babblebot.


And the built-in Dell boot partition diagnostics show all good as well.


Maybe this is some conflict between the (somewhat historic)
W2K and the controller or driver.


Yeah maybe. Or maybe it is a few hundred other things than that.
Or maybe his write cache is simply disabled. No, can't be that, obviously.


Can you test the card in a different computer, and/or with a
different OS?


To see if that changes the bad command good command
behaviour, babblebot? Yeah, that makes sense.
Keep trying, babblebot, you're obviously on a roll.


Arno

  #10  
Old June 27th 07, 07:24 PM posted to comp.sys.ibm.pc.hardware.storage
Folkert Rienstra
external usenet poster
 
Posts: 1,297
Default Slow SCSI RAID5 Array...

"Arno Wagner" wrote in message
Previously NewMan wrote:

[snip]


GOT IT!


Thank you SOOO much for that document!


As I said somewhere else, I had found a firmware update for my
PERC4/SC card. As near as I can tell, when the firmware was flashed
onto the card, while it DID maintain the array information to permit
reboot, the flash appears to have reset the various options of the
card - most notably the write-back cache!


Argh! That would do it!


Glad you agree, babblebot.
Now that he has your sign of approval he can get to sleep and rest safely.


[snip]




Congratulations on figuring it out!


And that despite your help, amazing, isn't it.
He must be almost as smart as you are.


Arno

 








Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.