HardwareBanter: a computer components & hardware forum
Cost of storage calculator?



 
 
#21 - April 28th 06, 04:23 PM - posted to comp.arch.storage

Would you happen to know how much this Storewiz device costs?
What about the data that's already on the NAS before the Storewiz
device is put in place? It looks like it only compresses new data flowing
through the device (on-the-fly compression/decompression), which means
one would have to "empty" the NAS device first after introducing Storewiz into
the network and then begin to "fill" it to get the benefit.
-G

#22 - April 28th 06, 04:33 PM - posted to comp.arch.storage

On 27 Apr 2006 23:51:17 -0700, wrote:

So one example that we got from the vendor was something along the
following lines... sort of an ROI calculator (my original topic)...

In this case, the company was adding about 25 - 50TB a month of new
data... the company uses an FA960e (which costs about 225K without any
shelves populated) and can host about 84TB max (raw space... logical
space when taking RAID1 into consideration is much lower... around
50TB)....

So for a company with growth like that, I guess such a product would
make sense? I guess... based on your comments, Bill and Faeandar, would
it be accurate to say that reducing the amount of data entering
primary storage is only valuable if the company has a huge growth rate
of new data?

/l


My initial thought is that if you're adding that much data per month,
you probably don't need high-performance storage for all of it, so
your costs should drop significantly if you figure out what needs
performance and what doesn't.

Now if you really are adding that much high-performance data, then you
have a bigger problem.

I've talked to this vendor and they claim they introduce only minimal
latency, but 1) I have yet to see that proven, 2) the compression ratio
they use in their marketing is the absolute best-case scenario, and 3)
they did not have failover capability when I spoke with them.

~F
#23 - April 28th 06, 04:34 PM - posted to comp.arch.storage

On 28 Apr 2006 08:23:55 -0700, "Sto Rage©" wrote:

Would you happen to know how much this Storewiz device costs?
What about the data that's already on the NAS before the Storewiz
device is put in place? It looks like it only compresses new data flowing
through the device (on-the-fly compression/decompression), which means
one would have to "empty" the NAS device first after introducing Storewiz into
the network and then begin to "fill" it to get the benefit.
-G


It can do a read/re-write of the data to compress it in the background
with spare cycles. I believe it is throttleable as well.

~F
#24 - April 28th 06, 05:19 PM - posted to comp.arch.storage

That's correct, Faeandar... it can compress existing data as well...

I agree that its implementation doesn't fully match marketing yet...
but I guess my question is... *if* the product were to match marketing
and be able to introduce minimal latency while compressing data by
at least about 60%, would this be the type of technology to invest in?

I see all the arguments that I need to cut down on the cost of
managing the data... and believe me, we are doing everything we
can to do that using products from Archivus etc... But in addition, we
do spend a lot of money on NetApp filers that these kinds of products
seem to be able to help with... Not to mention, since I started this
thread, in doing more research into our environment, I have been amazed
to see how much money we are paying in energy bills alone for all the
storage equipment we have...

/l

#25 - April 28th 06, 06:56 PM - posted to comp.arch.storage
Subject: Storage compression (was: Cost of storage calculator?)

In article, wrote:
That's correct, Faeandar... it can compress existing data as well...

I agree that its implementation doesn't fully match marketing yet...
but I guess my question is... *if* the product were to match marketing
and be able to introduce minimal latency while compressing data by
at least about 60%, would this be the type of technology to invest in?


Storage compression is fun. It is quite easy if the data is only
written and never modified or read, except that it requires quite a
bit of CPU power, which has to come from somewhere: either extra CPU
boxes have to be introduced in the storage stack (which cost money and
add complexity and unreliability), or existing CPUs in the disk arrays
/ NAS boxes have to be used (which slows down the storage systems), or
the compression runs as a device driver or loadable file system on the
application server (where compression uses expensive CPU cycles on a
machine that was bought to run the customer's application, not to
massage the customer's data).

Reading the compressed data sequentially from the beginning is
typically easy. Reading it randomly can be hard if decompression is
implemented carelessly. Reading little blocks in the middle can be
very hard if the format relies on compressing the whole stream
sequentially in large chunks.
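
To make the random-read point concrete, here is a minimal sketch in
Python (purely illustrative, not how Storewiz or any other product
works): compress in fixed-size chunks and keep a per-chunk offset
index, so a small read in the middle decompresses one chunk instead of
the whole stream.

import zlib

CHUNK = 64 * 1024  # uncompressed chunk size; an arbitrary choice

def compress_chunked(data):
    # Compress each chunk independently; remember where it landed.
    blob, index = bytearray(), []
    for i in range(0, len(data), CHUNK):
        c = zlib.compress(data[i:i + CHUNK])
        index.append((len(blob), len(c)))   # (offset, length) per chunk
        blob += c
    return bytes(blob), index

def read_range(blob, index, start, length):
    # Random read: touch only the chunks overlapping the wanted range.
    first, last = start // CHUNK, (start + length - 1) // CHUNK
    out = bytearray()
    for n in range(first, last + 1):
        off, clen = index[n]
        out += zlib.decompress(blob[off:off + clen])
    skip = start - first * CHUNK
    return bytes(out[skip:skip + length])

If the chunks were not independent (one zlib stream for the whole
file), read_range would have to decompress everything up to the byte
it wants, which is exactly the careless implementation described
above.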

What can be catastrophic is modifying (overwriting) the data in place.
First off, many compression algorithms rely on finding similarities
within the data stream, and modifying the data disrupts them, so the
new data is typically larger (compresses less). If the new data is
larger, the storage system has to virtualize it and store it outside
the hole in the file. If you do that for a while, the original file
layout becomes completely chaotic, both reading and writing speed go
to hell, and the metadata overhead and complexity of the stored files
become a big
mess. Furthermore, it is very difficult (but not impossible) to
implement a storage system that can move blocks of data around within
a file, and is correct and doesn't lose or corrupt data, even in the
face of system failures. Example: What if the compression system is
in the middle of updating the file to indicate (typically in some
metadata) that one block had to be moved to the end because it is less
compressible, and then the power fails, and this complex multi-phase
update is only partially recorded on disk? There are ways around this
(which typically involve logging, hardware NVRAM, and very careful
ordering of operations), but those require serious thought and great
care in the implementation.
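
For illustration, a minimal sketch of that logging discipline in
Python (a toy; real systems use NVRAM, sync the directory too, and
order operations far more carefully; apply_move here is a hypothetical
callback that relocates the block and updates the metadata):

import json, os

def move_block(log_path, apply_move, block_no, new_location):
    # Phase 1: durably record the intent before touching any metadata.
    with open(log_path, "w") as log:
        json.dump({"block": block_no, "dest": new_location}, log)
        log.flush()
        os.fsync(log.fileno())
    # Phase 2: do the multi-step update; it must be idempotent so
    # that replaying it after a crash is harmless.
    apply_move(block_no, new_location)
    # Phase 3: retire the intent record; only now is the move complete.
    os.remove(log_path)

def recover(log_path, apply_move):
    # After a power failure, redo any move whose intent record survived.
    if os.path.exists(log_path):
        with open(log_path) as log:
            rec = json.load(log)
        apply_move(rec["block"], rec["dest"])
        os.remove(log_path)

A crash before phase 1 completes leaves the old state; a crash after
it leaves enough information to redo the move, so the partially
recorded multi-phase update never goes unnoticed.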

One more hair in the soup: Some data doesn't compress very well.
Examples include images (for example in JPEG format), documents (in
PDF format, which is often internally compressed), backups (which are
often compressed by the backup software), and archival data such as
mail archives (which are usually compressed by the archiving
software). If you are running any reasonably complex ILM software, you
probably already have compression going on somewhere in the software
stack, and adding one more compression layer won't help much.

One technique that is closely related to compression is duplicate
elimination: Don't store copies of files (or blocks or mail messages)
if the content is identical. This really helps with backups of
desktop workstations (because every machine has a copy of the MS Excel
DLLs, which are mostly identical), and sometimes helps with mail
archiving (because the same 5MB spreadsheet attachment is forwarded 100
times within the same mail system, meaning that 100 copies of it are
in the mail archive). But again, be warned: some ILM software already
contains such duplicate elimination, so doing it again in the software
stack can be pointless and wasteful.
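
The core of duplicate elimination is simple enough to sketch (Python,
illustrative only): key every block by a cryptographic hash of its
content and store each distinct block exactly once.

import hashlib

class DedupStore:
    # Toy content-addressed block store: identical blocks stored once.
    def __init__(self):
        self.blocks = {}                       # digest -> block bytes

    def put(self, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # store only if new
        return digest                          # caller keeps the reference

    def get(self, digest):
        return self.blocks[digest]

store = DedupStore()
a = store.put(b"5MB spreadsheet attachment")   # first copy is stored
b = store.put(b"5MB spreadsheet attachment")   # 99 more cost a reference each
assert a == b and len(store.blocks) == 1

The production-grade headaches (hash collisions, reference counting
for deletion, crash consistency of the index) are exactly the kind of
complexity warned about above.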

I see all the arguments that I need to cut down on the cost of
managing the data... and believe me, we are doing everything we
can to do that using products from Archivus etc...


This is really the place where compression can shine: data that is
written once, never modified, and not read all that often. Examples
include backup, reference data, and compliance archives. But the
above warnings still apply; compression is not a panacea.

But in addition, we
do spend a lot of money on NetApp filers that these kinds of products
seem to be able to help with... Not to mention, since I started this
thread, in doing more research into our environment, I have been amazed
to see how much money we are paying in energy bills alone for all the
storage equipment we have...


Absolutely true. Here is my new rule of thumb: For every $1 you spend
on storage systems, you will spend another $1 on energy and
infrastructure costs (that includes air conditioning and floorspace
for it) over the lifetime of the hardware, and anywhere between $3 and
$15 on system administration and management (a good fraction of which
goes into avoiding, planning for, and managing failures of the storage
system). And if you buy a tape drive for $1, you can easily spend
anywhere between $10 and $100 on the blank tapes required to operate
it.

Also remember that management overhead doesn't just scale with the
size of the storage system in GB, but also with the complexity of the
storage system. A Netapp with 80 disks is only a little harder to
administer than a Netapp with 60 disks. But a Netapp with a separate
compression system installed is a lot harder to administer than just a
Netapp. It might be much cheaper to throw a few dozen disks at the
problem than have another moving part in an already complex system.

From this point of view, an investment of $0.40 in compression
hardware/software that makes your storage 30% more space efficient,
but increases the management overhead (for example because it reduces
reliability by 20%), may be very foolish:

Before:
$1 for storage system
$1 for energy/cooling/floorspace
$10 for management
= $12 lifetime cost

After:
$0.70 for storage system
$0.70 for energy/cooling/floorspace
$0.40 for compression system
$12 for management
= $13.80 lifetime cost
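
Since this thread started as a hunt for a cost calculator, that
arithmetic is easy to mechanize; a minimal sketch in Python, with the
multipliers as stated above (rules of thumb, not measurements):

def lifetime_cost(storage, mgmt=10.0, extra=0.0):
    # Rule of thumb: every $1 of storage hardware brings about $1 of
    # energy/cooling/floorspace over its lifetime, plus management,
    # plus any add-on systems such as a compression box.
    energy = storage
    return storage + energy + mgmt + extra

before = lifetime_cost(1.00)                        # 1 + 1 + 10        = 12.00
after = lifetime_cost(0.70, mgmt=12.0, extra=0.40)  # .7 + .7 + 12 + .4 = 13.80

The point of the exercise: the compression box only pays off if the
management term does not grow by more than the hardware and energy
terms shrink.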

--
Ralph Becker-Szendy
#26 - April 28th 06, 07:19 PM - posted to comp.arch.storage
Subject: Storage compression (was: Cost of storage calculator?)

Thanks Ralph... With the Storewiz product, they claim that management
overhead is nil (I doubt this, but let's just say it's true). Would
such a technology (even if not from Storewiz) be appropriate to invest
in?

I can attest to the fact that, thus far, I've had no issue with
Storewiz's reliability... it actually isn't taking me much time to
manage it in my dev lab... almost none at all...

/l

#27 - April 28th 06, 07:23 PM - posted to comp.arch.storage

On 28 Apr 2006 09:19:38 -0700, wrote:

That's correct, Faeandar... it can compress existing data as well...

I agree that its implementation doesn't fully match marketing yet...
but I guess my question is... *if* the product were to match marketing
and be able to introduce minimal latency while compressing data by
at least about 60%, would this be the type of technology to invest in?

I see all the arguments that I need to cut down on the cost of
managing the data... and believe me, we are doing everything we
can to do that using products from Archivus etc... But in addition, we
do spend a lot of money on NetApp filers that these kinds of products
seem to be able to help with... Not to mention, since I started this
thread, in doing more research into our environment, I have been amazed
to see how much money we are paying in energy bills alone for all the
storage equipment we have...

/l


There is no one on this board who can tell you whether it's worth it
or not. That's an individual requirements issue.

If all the things you mentioned are benefits to your company, why
would you not? That was not rhetorical: look for the reasons this is
not a good idea even though it meets all your obvious requirements,
e.g.:
- do you need failover?
- is it worth it to buy a Storewiz per filer (since they only supported
one each when I looked)?
- is that worth the management and potential issues that go along with
it?
- whatever other issues there may be...

It's something only you can decide; I think a lot of the pros and cons
have been covered. You can always eval it too.

~F
#28 - April 28th 06, 07:41 PM - posted to comp.arch.storage

I understand, Faeandar... I am just trying to see if I am investing in
the correct type of technology... I don't want to buy something that is
seen as non-standard.

/l

#29 - April 28th 06, 09:04 PM - posted to comp.arch.storage
Subject: Storewiz compression - other protocols? (was: Cost of storage calculator?)

"Faeandar" wrote in message
...
- is it worth it to buy a Storewiz per filer (since they only supported
one each when I looked)?
- is that worth the management and potential issues that go along with
it?
- whatever other issues there may be...

It's something only you can decide; I think a lot of the pros and cons
have been covered. You can always eval it too.

~F

Looking at their website, it only mentions support for CIFS and NFS. Does
anyone know if it supports other protocols, like NetApp's SnapVault and
SnapMirror?
Here's where I see its potential use for us.
In our environment we use NetApp's SnapVault to back up our primary filers
to remote R200s. These R200s have a long retention period, like 14 nightly
and 13 weekly snapshots. We also use their OSSV (Open Systems SnapVault)
to back up Windows and Unix boxes. These have the same retention. What we
have seen on many of the systems is that these snapshots consume a lot of
space, sometimes 300% more than the source. This is due to large log/text
files that get rotated on a daily basis on the source. These could easily be
compressed, so a device like Storewiz sitting in front of these R200s could
compress the snapvault deltas in real time, thus saving us a good chunk of
space on the R200s. Not sure how much these units cost, though; if they are
like $100K each it would be pointless. We could just as well add more
shelves and not deal with another piece of hardware to manage.
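
Back-of-the-envelope (a hypothetical sketch in Python; the churn and
compression numbers below are invented for illustration, not
measured): the appeal is that compressing the snapvault deltas shrinks
the whole retention tail, not just a single copy.

def retained_snapshot_tb(daily_delta_tb, nightly, weekly, compression=1.0):
    # Crude model: each retained day pins roughly one day of deltas.
    retained_days = nightly + weekly * 7
    return daily_delta_tb * retained_days / compression

raw = retained_snapshot_tb(0.5, nightly=14, weekly=13)                  # ~52.5 TB
logs = retained_snapshot_tb(0.5, nightly=14, weekly=13, compression=3)  # ~17.5 TB

With log/text files that compress around 3:1, the retained-snapshot
space in this toy model drops by two thirds, which is the kind of
saving that would have to be weighed against the unit's price.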

-G





#30 - April 28th 06, 09:14 PM - posted to comp.arch.storage
Subject: Storewiz compression - other protocols? (was: Cost of storage calculator?)

While I haven't bought the Storewiz system yet (we are still evaluating
it), we could get the box for around 40K... I am looking at it from a
long-term perspective, and over the course of 6 months I would certainly
have gotten more than my money back... and future expenditure would
also be less... but then again, it's not something I've ever bought
before. Cutting a PO for a filer is a standard operation... cutting a
PO for disks is standard too... Storewiz would have to be justified.
So I am looking for help from this group on whether this is a valid sort of
technology to invest in...
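
For what it's worth, the payback question is easy to frame as a toy
model (Python; the $/TB figure and compression ratio below are
hypothetical placeholders, not quotes):

def payback_months(box_cost, monthly_growth_tb, cost_per_tb, compression=1.6):
    # Months until avoided disk purchases equal the cost of the box.
    saved_tb_per_month = monthly_growth_tb * (1 - 1 / compression)
    return box_cost / (saved_tb_per_month * cost_per_tb)

# e.g. a $40K box, 25 TB/month growth, $5K per usable TB, 1.6:1 compression
print(payback_months(40_000, 25, 5_000))   # about 0.85 months in this toy case

Even with much more conservative inputs, growth of tens of TB a month
makes a 6-month payback at least plausible, which is why the decision
really hinges on the management and reliability side.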

/l

 



