January 31st 07, 09:00 PM posted to comp.arch.storage
Pete
fragmentation issue in Netapp

Raju Mahala wrote:
On Jan 28, 11:57 pm, Pete wrote:
Raju Mahala wrote:
Let me start with the debugging approach. Can someone suggest how to debug a
slow-performance issue on a NetApp filer, i.e. what should I check before
coming to any conclusion?
During the slow periods I see deferred back-to-back CPs in the "sysstat"
output and only around 20% full-stripe writes in the "statit" output. Maybe
it's due to fragmentation at the write-block level, or fragmentation at the
file level, but how do I determine which one is the problem and how do I
resolve it?
As a practice we currently run "wafl scan reallocate" on all volumes and the
problem goes away for a while. We still use traditional volumes, and in our
environment the average file size is very small and there is heavy turnover
of files. For example, a 1TB volume has more than 30 million files.
Please suggest how to handle this situation.

How much free space do you have in the volume? The data set you have (30
million files, average size ~33K) is very tough for any filesystem to deal
with. Deferred back-to-back CPs are usually a symptom of an overloaded
system - not enough disk, not enough memory, or both.

So, free up some space, add more disks, reduce the number of files you
have, or get a much bigger system.

Pete


Can you tell me how to find out whether a volume is fragmented or not?


I think "wafl scan measure_layout" will do that, but I don't think
fragmentation is going to be the issue. The problem is more likely to be the
30 million files you have to deal with.
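
For reference, roughly the commands being discussed - this is from memory, so
the exact syntax may differ between ONTAP releases, and "volname" is just a
placeholder for your volume:

    filer> sysstat -x 1                # watch the CP ty column; "B" means back-to-back CPs
    filer> df /vol/volname             # how much free space is left in the volume
    filer> priv set advanced           # statit and wafl scan need advanced privilege
    filer*> statit -b                  # start collecting disk statistics
    filer*> statit -e                  # stop and print them; check the full-stripe write %
    filer*> wafl scan measure_layout volname   # measure WAFL layout for the volume
    filer*> wafl scan status                   # check progress of running scans
    filer*> priv set admin             # drop back to normal privilege

If I remember right, measure_layout reports a layout ratio where a value close
to 1 means the blocks are laid out about as well as they can be; the further
above 1 it gets, the more scattered the volume is.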