January 29th 07, 02:26 PM, posted to comp.arch.storage
Raju Mahala
Subject: fragmentation issue in Netapp

It's a FAS980c clustered filer with two gigabit fibre ports configured in a
vif (multi type).
Only NFS traffic, nothing else: plain general-purpose data used in the
VLSI industry.

I am not sure how to check filesystem utilization.
Snapshot retention is one week: five nightly, one weekly, and
three hourly.
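In case it helps: on a 7-mode filer, capacity, inode, and snapshot utilization can be checked from the console roughly like this (a sketch only; the prompt and the volume name vol0 are placeholders for your own):

```
fas> df -h /vol/vol0      # volume capacity used/available, incl. snapshot reserve
fas> df -i /vol/vol0      # inode usage - relevant with tens of millions of small files
fas> snap list vol0       # space consumed per snapshot
fas> snap sched vol0      # current snapshot retention schedule
```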

Raju Mahala

On Jan 29, 6:23 am, Faeandar wrote:
On 28 Jan 2007 10:10:57 -0800, "Raju Mahala" wrote:

Let me start with the debugging approach. Can someone suggest how to debug
a slow-performance issue on a NetApp filer, i.e. what all should I check
before coming to any conclusion?


During the slow periods I found deferred back-to-back CPs in the
"sysstat" output and only around 20% full-stripe writes in the "statit" output.

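(For anyone following along, a sketch of how those numbers can be captured during a slow window; statit requires advanced privilege on 7-mode, and the prompt is a placeholder:)

```
fas> sysstat -x 1          # watch the CP type column for "B"/"b" back-to-back CPs
fas> priv set advanced
fas*> statit -b            # begin collecting disk/stripe statistics
      ... let the slow workload run for a few minutes ...
fas*> statit -e            # end and print; check the full-stripe write percentage
fas*> priv set admin
```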

Maybe it is due to fragmentation at the write-block level or fragmentation
at the file level, but how do I conclude which one is the problem, and how
do I resolve it?


As of now, as a practice we run "wafl scan reallocate" on all volumes
and the problem gets resolved for some time. We still use traditional
volumes, and in our environment the average file size is very small, with
heavy turnover of files. For example, a 1 TB volume has more than 30
million files (roughly 33 KB per file on average).
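If your ONTAP release has the reallocate command, it may be worth measuring how fragmented a volume actually is before running a full pass (a sketch; exact syntax and availability vary by release, and vol0 is a placeholder):

```
fas> reallocate measure /vol/vol0    # measure-only job; reports a layout optimization rating
fas> reallocate status -v            # view the measurement result
fas> reallocate start -f /vol/vol0   # one-off full reallocation of the volume
```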


Please suggest how to handle the situation.

What model filer? Cluster or single? Network connection? Client
type? CIFS or NFS? Application data or general purpose file serving?
Average percentage of file system utilization? Snapshot retention and
utilization?

Ontap is very resistant to fragmentation, more so than most file
systems, so I would be surprised to find that to be the problem.

30m files in 1TB, while more than average, is not on the lunatic fringe
by any means.

~F