Posted to comp.arch.storage, February 5th 07, 10:52 PM
From: [email protected]
Subject: ILM and Full Text Search

On Feb 3, 7:44 pm, Faeandar wrote:
On Sat, 03 Feb 2007 08:06:42 -0500, Nik Simpson wrote:
Faeandar wrote:


Yes, they could do that, but then so could every other competitor;
NDMP is available to anybody, not just Index Engines. EMC does
something similar, though probably proprietary, with its
classification product, which gets a "dump" of metadata from Celerra
file servers rather than walking the file system over the network.
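
For contrast, "walking the file system over the network" is the
brute-force approach: mount the filer and stat() everything. A
minimal sketch in Python (the /mnt/filer mount point is a made-up
example, not anything from a particular product):

import os
import sys

def walk_metadata(root):
    # Walk a mounted share and yield basic per-file metadata.
    # Every file costs at least one stat() round trip over the wire,
    # which is the overhead a server-side metadata dump avoids.
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or permission denied
            yield path, st.st_size, st.st_atime, st.st_mtime

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/mnt/filer"
    for path, size, atime, mtime in walk_metadata(root):
        print(path, size, atime, mtime)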


Any/every other product could, but so far as I've seen, none do. That
one bit is intriguing enough for me to look at them.



I may have been asking far too open-ended a question. My needs are
fairly simple: tell me what, where, how big, how frequently accessed,
what type of file, etc. I've no need for a deep dive into content.
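
Those fields map straight onto stat() output, so a roll-up is cheap
once the walk is done. A rough sketch of that kind of report (the
age-bucket boundaries are my own arbitrary choices, not any vendor's):

import collections
import os
import time

def srm_summary(root):
    # Roll per-file stats up into the report described above: file
    # counts and bytes per extension, plus last-access age buckets.
    count_by_type = collections.Counter()
    bytes_by_type = collections.Counter()
    age_buckets = collections.Counter()
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue
            ext = os.path.splitext(name)[1].lower() or "(none)"
            count_by_type[ext] += 1
            bytes_by_type[ext] += st.st_size
            days = (now - st.st_atime) / 86400
            if days < 30:
                age_buckets["accessed <30d"] += 1
            elif days < 180:
                age_buckets["accessed <180d"] += 1
            else:
                age_buckets["accessed >=180d"] += 1
    return count_by_type, bytes_by_type, age_buckets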


Index Engines wouldn't be a solution then, since to the best of my
knowledge it's all about content indexing & search. However, both
Scentric and Kazeon can do what you want without having to generate a
content index.


We have Kazeon on eval and so far I can't say I'm impressed. It's
quite slow. Getting data on an entire filer would take many weeks
based on performance tests. It took 4 days to run a single qtree on a
filer.


Is this the time for Kazeon to crawl the filer? How much data is on
that filer, and how many files is that data spread across? Is it
crawling the filer via the FPolicy link or via an NFS link?
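
File count matters more than raw capacity here, since crawl time is
dominated by per-file metadata operations. A back-of-envelope check
(the rate and file count below are made-up illustrations, not Kazeon
measurements):

def crawl_days(num_files, files_per_second):
    # Days to enumerate num_files at a sustained metadata rate.
    return num_files / files_per_second / 86400.0

# e.g. 20 million files at ~60 files/sec is roughly 4 days, which is
# the sort of math behind a "weeks per filer" estimate:
print(crawl_days(20_000_000, 60))  # ~3.9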




I'm looking for typical SRM stats, but on a fair scale.


So you don't actually want to take any actions, like migrating
little-used stuff to tier 2?


That is correct. No automated migrations or anything. I want
information that my staff and I can base decisions on, but our needs
are not simple enough for policy-based file migration.

Anyway, both Scentric and Kazeon offer extensive SRM reporting,
though if reporting is all you want, you might want to take a look at
Monosphere, which has a pure file SRM solution. How big is a "fair
scale" to you: 10s, 100s, 1000s of TB?


I thought Monosphere was more of a trending and analysis tool, not
file-level reporting? We are slated to eval them for a different
purpose, but I'll keep them in mind for this as well.
Fair scale would be 100s of TB.

~F