A computer components & hardware forum. HardwareBanter

inode usage / file



 
 
  #11
November 17th 07, 02:16 AM, posted to comp.arch.storage
zorba (Posts: 3)

On Nov 10, 3:10 am, "Edwin Cooke" wrote:
In message [...],

zorba parinay(at)gmail.com wrote:

Hi,
I have a 1.4 TB volume exported over NFS, accessed from a Linux
NFS(v3) client. I have to fill all inodes on this volume. One way of
doing it is to create that many files, which is time-consuming. Another
option is to use SnapMirror/replication technologies to create this
data set. Is anything else more efficient than either of these?
Can there be more than one inode per file? I mean, if I create a file
of 100 GB for example, will it utilize only one inode, or inode +
indirect inodes = total inode count? And how do I see per-file inode
usage on Linux?


Please note I am talking about the NetApp WAFL filesystem here.


and in message [...],
"the wharf rat" wrat(at)panix.com replied
|
| Hi,
| I have a 1.4 TB volume exported over NFS, accessed from a Linux
| NFS(v3) client. I have to fill all inodes on this volume. One way of
| doing it is to create that many files, which is time-consuming. One more
|
| [suggested script omitted]
|
| WAFL shouldn't have any problem with the pathologically large
| directory you'll end up with...

Wharf Rat, you must be joking, right?

WAFL's performance will degrade for very large directories.

In message [...],
Bakul Shah usenet(at)bitblocks.com wrote



}
} zorba wrote:
}
} I have a 1.4 TB volume exported over NFS, accessed from a Linux
} NFS(v3) client. I have to fill all inodes on this volume. One way of
} doing it is to create that many files, which is time-consuming. Another
} option is to use SnapMirror/replication technologies to create this
} data set. Is anything else more efficient than either of these?
}
} Write a script. In /bin/sh:
}
} x=0; while touch $x; do x=$(($x + 1)); done
}
} Can there be more than one inode per file? I mean, if I create a file
} of 100 GB for example, will it utilize only one inode, or inode +
} indirect inodes = total inode count? And how do I see per-file inode
} usage on Linux?
}
} One inode is used per file. Use
}
} df -i
}
} to see the inode count.
}
} Please note I am talking about the NetApp WAFL filesystem here.
}
} I am trying my best to find out the answers; if anybody can help cut
} the time short, I will be grateful.
}
} I sense a lot of confusion. I am not even sure if it is inodes
} you want. What you wrote seems to make more sense if the word
} "inode" is replaced with the word "block". If I were you I'd focus
} on learning the basic concepts as that will save more time and
} pain in the long run.

NetApp's WAFL, the "write-anywhere file layout", is different enough from
other filesystems (such as ufs, ntfs, or ext2fs) that advice which makes
sense for those filesystems does *not* necessarily apply to WAFL.

By default, NetApp's WAFL filesystem allocates one inode (to store one
file) for every 32K bytes of disk space in a volume.
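That default makes the inode count easy to estimate. A rough arithmetic sketch (assuming the 1.4 TB in this thread is decimal terabytes; the filer's actual maxfiles value is authoritative):

```shell
#!/bin/sh
# Rough arithmetic only: at one inode per 32 KiB of volume space, a
# decimal 1.4 TB volume gets on the order of 42.7 million default
# inodes. maxfiles can raise or lower this on the filer.
vol_bytes=1400000000000
bytes_per_inode=32768
echo $((vol_bytes / bytes_per_inode))    # prints 42724609
```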

The "df -i" command will work on a Linux client, but some NFS clients
may not support it. On the other hand, the NetApp filer itself (through
the Data ONTAP command-line interface) does support "df -i".
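For the Linux-client side of the question, a minimal sketch (GNU coreutils `stat` assumed): `df -i` reports per-filesystem inode totals, while `stat` on a single file shows that one file, however large, still occupies exactly one inode; only the block count grows.

```shell
#!/bin/sh
# "df -i" shows filesystem-wide inode usage; "stat" shows one file's
# inode number, allocated 512-byte blocks, and size. The file below is
# written in /tmp purely for illustration.
df -i /tmp
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=64 2>/dev/null    # a 64 KiB file
stat -c 'inode=%i blocks=%b size=%s' "$f"
rm -f "$f"
```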

As Bakul has observed, there does seem to be confusion here.

Zorba, it is not clear what you mean by "I have to fill all indoes"
[presumably "inodes"?]. Does that mean that you want to create a
number of files equal to the "maxfiles" value? (The per-volume
value reported by the filer for the "maxfiles" command output.)
Are you running some kind of performance or NFS stress test?
Did you make some intentional adjustment of "maxfiles"?

While the touch command will create a zero-length file (thus
allocating an inode), this is not a typical usage pattern, so
any results from the test may not apply to real-life operation.
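If zero-length files are acceptable anyway, a hedged sketch of a variant that avoids the pathologically large single directory mentioned earlier, by bucketing files into subdirectories (paths and counts here are illustrative; on the real volume you would drop TOTAL and simply let touch fail when inodes run out):

```shell
#!/bin/sh
# Not WAFL-specific: create many empty files, at most PER per
# subdirectory, so no one directory grows huge. A temp dir stands in
# for the NFS mount point.
ROOT=$(mktemp -d)
PER=500
TOTAL=2000
i=0
while [ "$i" -lt "$TOTAL" ]; do
    dir="$ROOT/d$((i / PER))"
    mkdir -p "$dir"
    touch "$dir/$i" || break    # touch fails once the volume is out of inodes
    i=$((i + 1))
done
echo "created $i files under $ROOT"
```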

--
Edwin


Edwin,
--I mean inodes here (especially after Bakul's comment, I really went
back to Maurice Bach)
--I had modified the default maxfiles count to what I needed
--Creating a soft link to a single data block seems to be an efficient
way, for now
--The scenario is to utilize the maximum available inodes and then initiate
a SnapMirror resync. The filer should not panic
--Utilizing inodes is done over NFS
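A hedged sketch of that symlink idea: every symlink consumes an inode of its own, and on many filesystems a short target string is stored inside the inode itself ("fast symlinks"), so no extra data blocks get written; whether WAFL stores them that way is an assumption here, and the temp dir stands in for the NFS mount.

```shell
#!/bin/sh
# Each "ln -s" allocates one new inode pointing at the same target path.
ROOT=$(mktemp -d)
touch "$ROOT/seed"                 # one real file as the common target
n=0
while [ "$n" -lt 100 ]; do         # capped for illustration; on the real
    ln -s "$ROOT/seed" "$ROOT/l$n" || break   # volume, loop until ln fails
    n=$((n + 1))
done
echo "made $n symlinks"
```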

By default, NetApp's WAFL filesystem allocates one inode (to store one
file) for every 32K bytes of disk space in a volume.


For a file using more than 32K of disk space, how does that work?


best regards
Zorba
 





Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.