HardwareBanter

HardwareBanter (http://www.hardwarebanter.com/index.php)
-   Storage & Hardrives (http://www.hardwarebanter.com/forumdisplay.php?f=30)
-   -   fragmentation issue in Netapp (http://www.hardwarebanter.com/showthread.php?t=144694)

Raju Mahala January 28th 07 06:10 PM

fragmentation issue in Netapp
 
Let me start with the method of debugging. Can someone suggest how to debug a slow-performance issue on a NetApp filer? I mean, what all should I check before coming to any conclusion?

During slow performance I found deferred back-to-back CPs in "sysstat" output and only around 20% full-stripe writes in "statit" output.

Maybe it is due to fragmentation at the write-block level, or fragmentation at the file level, but how do I conclude which is the problem, and how do I resolve it?

As a practice, we currently run "wafl scan reallocate" on all volumes and the problem gets resolved for some time. We still use traditional volumes, and in our environment the average file size is very small, with heavy turnover of files. For example, a 1TB volume has more than 30m files.

Please suggest how to handle the situation.
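For anyone reproducing the checks described above, these are Data ONTAP console commands; a minimal sketch (the 1-second interval and the volume name "vol1" are illustrative, and the "wafl scan" commands need advanced privilege):

```
priv set advanced                # wafl scan commands live in advanced mode
sysstat -x 1                     # 1-second samples; the CP-type column shows
                                 #   back-to-back CPs (commonly "B"/"b")
statit -b                        # begin collecting per-disk statistics
statit -e                        # end and print; compare full-stripe vs
                                 #   partial-stripe write counts
wafl scan measure_layout vol1    # report the volume's layout ratio
wafl scan reallocate vol1        # the defragmentation scan the post mentions
priv set                         # return to normal privilege
```

The exact CP-type letters vary by ONTAP release, so treat the legend in your own sysstat output as authoritative.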


Pete January 28th 07 06:57 PM

Raju Mahala wrote:
[snip]


How much free space do you have in the volume? The data you have (30 million files at an average size of ~33K) is very tough for any filesystem to deal with. Deferred back-to-back CPs are usually a symptom of an overloaded system: not enough disks, not enough memory, or both.

So: free up some space, add more disks, reduce the number of files you have, or get a much bigger system.
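Free space and file count can both be read off the filer console; a quick sketch (the volume name "vol1" is a placeholder, not from the thread):

```
df -h /vol/vol1     # space used and available, including snapshot reserve
df -i /vol/vol1     # inodes used and free; with ~30m files, inode
                    #   headroom matters as much as block space
maxfiles vol1       # show (and, with a count argument, raise) the
                    #   volume's inode limit
```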

Pete

Faeandar January 29th 07 01:23 AM

On 28 Jan 2007 10:10:57 -0800, "Raju Mahala"
wrote:

[snip]



What model filer? Cluster or single? Network connection? Client
type? CIFS or NFS? Application data or general-purpose file serving?

Average percentage of filesystem utilization? Snapshot retention and
utilization?

ONTAP is very resistant to fragmentation, more so than most file
systems, so I would be surprised to find that to be the problem.

30m files in 1TB, while more than average, is not on the lunatic fringe
by any means.

~F

Raju Mahala January 29th 07 02:21 PM

Hello Pete,
Around 300GB of space is available in the volume that has the problem,
but other volumes are able to handle that much write load with the same
available space. If the write load on that particular volume is reduced,
the filer behaves very well.
It's a FAS980c cluster filer, so I don't think a more powerful system is required.

What is the best way to find out how much fragmentation there is in a
particular volume, and how do I defrag it?
As a practice we run "wafl scan reallocate" to defrag, but if I check
with "wafl scan measure_layout" it doesn't show much difference
before and after "wafl scan reallocate".
Do you think "reallocate start vol" will help more than "wafl scan
reallocate"? Our Data ONTAP version is 7.0.4.

Another question: will a bigger aggregate ease the situation,
compared to the traditional volume?

Raju Mahala
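For reference on the question above: from Data ONTAP 7.0 onward the "reallocate" command set supersedes "wafl scan reallocate". A hedged sketch of typical usage (the volume name "vol1" is a placeholder, not from the thread):

```
reallocate on                   # enable reallocation scans on the filer
reallocate measure /vol/vol1    # measure layout and log an optimization rating
reallocate start -f /vol/vol1   # one-shot full reallocation of the volume
reallocate status -v            # check progress of running scans
```

Without -f, "reallocate start" schedules a recurring scan that only moves data when the measured layout crosses a threshold.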

On Jan 28, 11:57 pm, Pete wrote:
Raju Mahala wrote:
[snip]



Raju Mahala January 29th 07 02:26 PM

It's a FAS980c cluster filer with two fibre gigabit ports configured in a
vif (multi type).
Only NFS operations, nothing else. Plain general-purpose data used in
the VLSI industry.

I am not sure how to check filesystem utilization.
Snapshot retention is for one week: five nightly, one weekly, and
three hourly.
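Both of those can be read from the filer console; a sketch (the volume name "vol1" is a placeholder):

```
df -h /vol/vol1     # the capacity column here is the filesystem
                    #   utilization being asked about
snap sched vol1     # prints the snapshot schedule as
                    #   weekly/nightly/hourly counts
snap reserve vol1   # percentage of the volume set aside for snapshots
```

The retention described (one weekly, five nightly, three hourly) would correspond to something like "snap sched vol1 1 5 3@8,12,16", where the hourly times are an assumption.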

Raju Mahala

On Jan 29, 6:23 am, Faeandar wrote:
[snip]



Raju Mahala January 31st 07 02:29 PM

On Jan 29, 6:23 am, Faeandar wrote:
[snip]


Can you tell me how to find out whether a volume is fragmented or not?


Raju Mahala January 31st 07 02:31 PM

On Jan 28, 11:57 pm, Pete wrote:
Raju Mahala wrote:
[snip]


Can you tell me how to find out whether a volume is fragmented or not?


Pete January 31st 07 09:00 PM

Raju Mahala wrote:
[snip]

can you tell me how to find out whether volume is fragmented or not ?


I think "wafl scan measure_layout" will do that, but I don't think that's
going to be the issue. The problem is more likely to be the 30 million
files you have to deal with.


Faeandar February 1st 07 02:04 AM

On 31 Jan 2007 06:29:58 -0800, "Raju Mahala"
wrote:

[snip]

can you tell how to find out whether volume is fragmented or not ?


Utilization is determined by how full the volume is. What is it at?
If it's less than 80% full then I highly doubt it's fragmented.
Possible, but unlikely.

You can tell how many files per volume or qtree you have by putting in
a default quota; no hard limits, just something to get reporting:

* tree@/vol/your_vol_here - -

Then turn quotas on, if they are not on already, and run a quota report.
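Spelled out, the steps above would look roughly like this; a sketch using the placeholder volume name from the quota line ("your_vol_here"):

```
# In /etc/quotas on the filer - a default tree quota with no limits,
# present only so quota report has something to count against:
* tree@/vol/your_vol_here - -
```

Then, on the console, "quota on your_vol_here" followed by "quota report" prints per-qtree file counts.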

~F


Raju Mahala February 1st 07 04:47 AM

On Feb 1, 2:00 am, Pete wrote:
[snip]

I think wafl_scan measure layout will do that, but I don't think that's
going to be the issue. The problem is more likely to be the 30 million
files you have to deal with.


I do "wafl scan measure_layout" but always get a value around 1.??
something, whereas I expected around 3-4, and it's the same on other
filers' volumes that are working fine but have fewer files and less
turnover. Even after "wafl scan reallocate", if I check with
measure_layout the value remains almost the same. So I have doubts
about wafl scan.
After all this communication, I feel the problem is more or less the
large number of files with heavy turnover.
Any comments?


