A computer components & hardware forum. HardwareBanter

fragmentation issue in Netapp



 
 
  #1
January 28th 07, 07:10 PM, posted to comp.arch.storage
Raju Mahala

Let me start with the debugging approach. Can someone suggest how to debug a slow-performance issue on a NetApp filer? That is, what all should I check before coming to any conclusion?

During the slow periods I see deferred back-to-back CPs in the "sysstat" output, and only around 20% full-stripe writes in the "statit" output.

It may be due to fragmentation at the write-block level or fragmentation at the file level, but how do I determine which one is the problem, and how do I resolve it?

As a practice we currently run "wafl scan reallocate" on all volumes, and the problem goes away for some time. We still use traditional volumes, and in our environment the average file size is very small with heavy turnover of files. For example, a 1TB volume has more than 30 million files.

Please suggest how to handle the situation.
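For context, the numbers in this post pin down the average file size. A quick back-of-the-envelope (assuming "1TB" means 10^12 bytes and the volume is roughly full):

```python
# Back-of-the-envelope: ~1 TB of data spread across ~30 million files.
volume_bytes = 1 * 10**12      # assuming "1TB" is decimal, 10^12 bytes
file_count = 30_000_000        # "more than 30m files"

avg = volume_bytes / file_count
print(f"average file size ~= {avg:.0f} bytes (about 33 KB)")
```

This is where the "~33K average" figure quoted later in the thread comes from.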

  #2
January 28th 07, 07:57 PM, posted to comp.arch.storage
Pete

Raju Mahala wrote:
[quoted text snipped]


How much free space do you have in the volume? The data you have (30
million files, average size ~33K) is very tough for any filesystem to
deal with. The deferred back-to-back CP is usually a symptom of an
overloaded system: not enough disk, memory, or both.

So, free up some space, add more disks, reduce the number of files you
have, or get a much bigger system.

Pete
  #3
January 29th 07, 02:23 AM, posted to comp.arch.storage
Faeandar

On 28 Jan 2007 10:10:57 -0800, "Raju Mahala" wrote:
[quoted text snipped]



What model filer? Cluster or single? Network connection? Client
type? CIFS or NFS? Application data or general purpose file serving?

Average percentage of file system utilization? Snapshot retention and
utilization?

ONTAP is very resistant to fragmentation, more so than most file
systems, so I would be surprised to find that to be the problem.

30m files in 1TB, while more than average, is not on the lunatic fringe
by any means.

~F
  #4
January 29th 07, 03:21 PM, posted to comp.arch.storage
Raju Mahala

Hello Pete,
Around 300GB of space is available in the volume that creates the problem,
but other volumes are able to handle that much write with the same
available space. If the writes to that particular volume are reduced, the
filer behaves very well.
It's a FAS980c cluster filer, so I don't think a more powerful system is required.

What is the best way to find out how much fragmentation there is in a
particular volume, and how do I defrag it?
As a practice we run "wafl scan reallocate" to defrag, but if I check
with "wafl scan measure_layout" it doesn't show much difference
before and after "wafl scan reallocate".
Do you think "reallocate start vol" would help more than "wafl scan
reallocate", given that our Data ONTAP is 7.0.4?

Another question: would a bigger aggregate ease the situation,
compared to the traditional volume?

Raju Mahala

On Jan 28, 11:57 pm, Pete wrote:
[quoted text snipped]


  #5
January 29th 07, 03:26 PM, posted to comp.arch.storage
Raju Mahala

It's a FAS980c cluster filer with two fibre gigabit ports configured in a
vif (multi type).
Only NFS operations, nothing else. Plain general-purpose data used in
the VLSI industry.

I am not sure how to check filesystem utilization.
Snapshot retention is one week: five nightly, one weekly, and
three hourly.

Raju Mahala

On Jan 29, 6:23 am, Faeandar wrote:
[quoted text snipped]


  #6
January 31st 07, 03:29 PM, posted to comp.arch.storage
Raju Mahala

On Jan 29, 6:23 am, Faeandar wrote:
[quoted text snipped]


Can you tell how to find out whether a volume is fragmented or not?

  #7
January 31st 07, 03:31 PM, posted to comp.arch.storage
Raju Mahala

On Jan 28, 11:57 pm, Pete wrote:
[quoted text snipped]


Can you tell me how to find out whether a volume is fragmented or not?

  #8
January 31st 07, 10:00 PM, posted to comp.arch.storage
Pete

Raju Mahala wrote:
[earlier quoted text snipped]
can you tell me how to find out whether volume is fragmented or not ?


I think "wafl scan measure_layout" will do that, but I don't think that's
going to be the issue. The problem is more likely to be the 30 million
files you have to deal with.

  #9
February 1st 07, 03:04 AM, posted to comp.arch.storage
Faeandar

On 31 Jan 2007 06:29:58 -0800, "Raju Mahala" wrote:
[earlier quoted text snipped]
can you tell how to find out whether volume is fragmented or not ?


Utilization would be determined by how full it is. What is the volume
at? If it's less than 80% then I highly doubt it's fragmented.
Possible, but unlikely.

You can tell how many files per volume or qtree you have by putting in
a default quota; no hard limits, just something to get reporting:

* tree@/vol/your_vol_here - -

Then turn on quotas, if not on already, and run a quota report.

~F
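Concretely, the default-quota trick above might look like this on a 7-mode filer console (a sketch only; "your_vol" is a placeholder volume name, and the exact behavior should be checked against your Data ONTAP release):

```
# /etc/quotas entry: a default tree quota with no limits,
# used only so that "quota report" counts files per qtree.
* tree@/vol/your_vol - -

# On the filer console:
filer> quota on your_vol       # activate quotas for the volume
filer> quota report            # per-target disk usage and file counts
```

Since the quota sets no hard limits, it should not affect clients; it only makes the per-qtree file counts visible in the report.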

  #10
February 1st 07, 05:47 AM, posted to comp.arch.storage
Raju Mahala

On Feb 1, 2:00 am, Pete wrote:
[quoted text snipped]


I do run "wafl scan measure_layout", but I always get a value around
1.x, whereas I expected around 3-4; and it's the same on other filers'
volumes that are working fine but have fewer files and less turnover.
Even after "wafl scan reallocate", the measure_layout value remains
almost the same, so I have doubts about the wafl scan.
After all this discussion, I feel the problem is more or less the large
number of files with heavy turnover.
Any comments?
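For reference, the way this thread reads the measure_layout number can be sketched as follows. This is an illustration, not NetApp documentation: the "near 1.0 is close to optimal" reading and the 2.0 cut-off are assumptions based purely on the values quoted above (~1.x on healthy volumes, 3-4 expected for a fragmented one).

```python
def layout_looks_fragmented(ratio: float, threshold: float = 2.0) -> bool:
    """Interpret a 'wafl scan measure_layout' ratio.

    Assumption (from the thread, not official documentation): a ratio
    near 1.0 means blocks are laid out close to optimally, while values
    around 3-4 would suggest real fragmentation. The 2.0 threshold is
    an arbitrary illustrative cut-off.
    """
    return ratio >= threshold

print(layout_looks_fragmented(1.2))  # the ~1.x values reported above
print(layout_looks_fragmented(3.5))  # the 3-4 range expected if fragmented
```

Under that reading, the ~1.x values reported here support the conclusion the thread converges on: the volume is not meaningfully fragmented, and the load of 30 million small, rapidly churning files is the more likely culprit.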

 





