
Network Storage Help



 
 
  #11  
Old January 18th 08, 04:15 AM posted to comp.arch.storage
Who Cares

the wharf rat wrote:
scotv453 wrote:
Now for the question(s)? Should I go with a SAN or a NAS? Keep in mind


NAS.

The advantages of a SAN relate to pure performance and its ability
to share storage as block devices. That means that the SAN looks like any
other disc drive to the host, so to make a network share you'd need to, say,
attach the SAN device to a Windows file server and then share the file system
you create there. But SANs are complicated to set up and administer, and still
more expensive than simple network filesystems.

Running a network filesystem over gigabit links will certainly provide
adequate performance, and you can most probably find a NAS device that
interacts with Unix as well as Windows. I don't see anything in your list
that requires the kind of speed or flexibility a SAN provides, so I can't
see any reason to recommend one over a probably cheaper and certainly easier
to manage NAS.

If you're comfortable with Linux and want to save money there's
no reason not to buy a reasonable SCSI (or even SATA!) disc array (make
sure to plan for adequate expansion both in volume and throughput) and
use a Linux server as the NAS device. The redundancy will be built into
the array - if the Linux server dies completely it's easy enough to swap
in something temporary - and if you DO decide to experiment with SAN storage
Linux supports iSCSI just fine. And yes, you will need some kind of backup
device.
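
For what it's worth, the iSCSI side really is simple on Linux with the
open-iscsi initiator. A rough sketch - the portal address and target IQN
below are made up, substitute whatever your array actually reports:

    # discover the targets the array offers, then log in to one of them
    iscsiadm -m discovery -t sendtargets -p 192.168.10.50
    iscsiadm -m node -T iqn.2008-01.com.example:array0 -p 192.168.10.50 --login
    # the LUN then appears as an ordinary block device (here assumed /dev/sdb)
    mkfs.ext3 /dev/sdb
    mount /dev/sdb /export/data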


It typically comes down to whether you need to access the same files or
file systems from more than one client. If that is your access pattern, then
NAS tends to make more sense. NAS also allows you to grant a rather
broad range of access controls. Common access for CIFS and NFS is a
pretty standard feature, though implementations differ in how that identity
mapping is done.
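
As a rough illustration, on a Linux-based NAS the same directory can be
served over both protocols at once. The paths, subnet and group below are
hypothetical, and the CIFS/NFS identity mapping still has to be worked out
per site:

    # NFS export for the Unix clients
    echo '/export/shared 192.168.1.0/24(rw,sync,root_squash)' >> /etc/exports
    exportfs -ra
    # CIFS share of the same directory for the Windows clients: add a stanza
    # like the following to /etc/samba/smb.conf, then restart Samba
    #   [shared]
    #       path = /export/shared
    #       read only = no
    #       valid users = @staff
    /etc/init.d/smb restart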

It is a bit harder to convert a SAN to multi-client access, since unless
you have software on all of the clients that allows safe shared access to
the same virtual drive, you will corrupt data almost immediately. However,
if you do go SAN and later need to go multi-client, you can just put a bigger
SAN-attached host with a large memory and multiple processors in front of it
as a file server.

SAN is indeed harder to administer.

  #13  
Old January 20th 08, 11:56 AM posted to comp.arch.storage
[email protected]

On Jan 20, 8:29 am, (the wharf rat) wrote:

wrote:

I am going to rule expensive NAS boxes out, as you want to do both


        Why?  What about NAS prevents either?  How do you do "generic
file share" with a SAN?  Unless you buy a (very) expensive NAS head
for your EMC SAN...

        I know SANs are really cool but unless you explicitly need shared
block level storage (say for an Oracle cluster) is there any reason to
prefer one in this situation?


so... I didn't say to buy an expensive NAS box...
NAS boxes are traditionally crap at block storage, but SAN arrays can
be used for all sorts of things.

Just use a Windows server with HBAs from day one, and this will serve its
purpose. As the infrastructure expands, then look to use a NAS head
(possibly)... OnSTOR make some decent devices, and NetApp will now
qualify their NAS heads with generic storage (a number of large
accounts have NetApp heads working with HDS, EMC & 3PAR storage
subsystems).

At the moment, cost and flexibility are key for this gent.

Cheers,

B
  #15  
Old January 20th 08, 09:05 PM posted to comp.arch.storage
Maxim S. Shatskih

The advantages of a SAN relate to pure performance and its ability

Why use a SAN if you have no cluster-capable server software? Put the same disk
drives inside an ordinary server and install the usual OS there.

Why use a NAS if you can buy an InWin case, an Asus mobo, an Intel CPU (for SMB
file serving, a Celeron is enough) and Kingston memory, plus several Seagate
drives, assemble all of this yourself within 40 minutes, install a commodity
server OS (Windows, Linux or FreeBSD - choose your favourite; BTW, Windows is
very good at SMB file serving performance) - and get on with it?

If something breaks, just replace it. It takes no more than 4 hours for the
IT guy to replace a failed mobo, and that time includes the visit to the store
to buy one. You are also free to keep spare parts on hand, cutting this down to
20 minutes or so.

Proper backup software (I know 3 good disk imaging backup products, and several
file-level ones) will save you from the danger of hard disk failure. Mirroring
is also capable of this.
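
On the Linux variant of that box, for instance, a software mirror is only a
few commands with mdadm (device names below are just examples):

    # build a RAID-1 mirror from two drives and put a filesystem on it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mkfs.ext3 /dev/md0
    mount /dev/md0 /export
    # a failed member is later handled with mdadm --fail, --remove and --add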

Snapshot technologies are available in most backup software (where they
primarily belong), in Windows Server (below the filesystem) and in recent
FreeBSD releases (at the filesystem level). I think in Linux too.
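
On Linux the usual route is an LVM snapshot, roughly like this (the volume
group and volume names are made up):

    # create a 5 GB copy-on-write snapshot of the data volume
    lvcreate --size 5G --snapshot --name data_snap /dev/vg0/data
    mount -o ro /dev/vg0/data_snap /mnt/snap
    # ... run the backup against /mnt/snap, then drop the snapshot ...
    umount /mnt/snap
    lvremove -f /dev/vg0/data_snap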

So it looks like paying for a specialized NAS machine is overkill, unless you
have a critical requirement for 24x7 uptime and responsive vendor support.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation

http://www.storagecraft.com

  #16  
Old January 21st 08, 05:22 PM posted to comp.arch.storage
the wharf rat

Maxim S. Shatskih wrote:

Why use a SAN if you have no cluster-capable server software? Put the same disk
drives inside an ordinary server and install the usual OS there.


Storage consolidation is generally a good idea. Local storage has
significant disadvantages in terms of performance and reliability. A storage
network lets you make efficient use of available storage, simplifies
backups and recoveries, allows you to build a single ruggedized array, and
so on.

Why use a NAS if you can buy an InWin case, an Asus mobo, an Intel CPU (for SMB
file serving, a Celeron is enough) and Kingston memory, plus several Seagate drives,


Because not everyone has the time, head count, or skill set
available for such a project. Also, commercial NAS servers like NetApp's
offer a feature set that would be difficult and expensive to replicate
in a home-built box. BTW, if you cost it out you'll see that a roll-your-own
isn't a *whole* lot cheaper than an off-the-shelf solution.

  #19  
Old January 22nd 08, 04:30 AM posted to comp.arch.storage
Faeandar

On Sun, 20 Jan 2008 02:56:03 -0800 (PST), wrote:

On Jan 20, 8:29 am, (the wharf rat) wrote:

        I know SANs are really cool but unless you explicitly need shared
block level storage (say for an Oracle cluster) is there any reason to
prefer one in this situation?


so... I didn't say to buy an expensive NAS box...
NAS boxes are traditionally crap at block storage, but SAN arrays can
be used for all sorts of things.

At the moment, cost and flexibility are key for this gent.

Cheers,

B


My experience is that applications which claim to need block access
usually turn out not to. I think SAN is a niche technology, especially
considering that 70% of all data is unstructured files.

And Oracle *recommends* using NFS with Oracle RAC, so clustered Oracle
on NAS is preferred.
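
For reference, the mount options usually quoted for Oracle datafiles over
NFSv3 look roughly like this /etc/fstab line (the hostname and paths are
hypothetical; check your Oracle and NAS vendor docs for the exact list):

    nas01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0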

The only things I can think of that *need* _shared_ block level
storage are clustered file systems. Other than that, everything I've
ever run across will work on NAS.

Caveat: I don't do Windows.

~F
  #20  
Old January 22nd 08, 07:41 AM posted to comp.arch.storage
Cydrome Leader

Faeandar wrote:

My experience is that applications which claim to need block access
usually turn out not to. I think SAN is a niche technology, especially
considering that 70% of all data is unstructured files.

And Oracle *recommends* using NFS with Oracle RAC, so clustered Oracle
on NAS is preferred.

The only things I can think of that *need* _shared_ block level
storage are clustered file systems. Other than that, everything I've
ever run across will work on NAS.

Caveat: I don't do Windows.

~F


SANs exist for more than "shared block level storage". They exist for
massive amounts of reliable storage. This is handy for dozens or hundreds
of machines. If you need more space on host X, you carve it up and use it.
It's very convenient, and you don't have the overhead of NFS and NFS tuning
issues either, not to mention that dual paths over FC drive the reliability
way up, and newer 2 and 4 Gb FC cards and switches are plain faster than
Ethernet anyway.
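
As a concrete example of "carve it up and use it" from the Linux host side,
once the new LUN has been presented (the host number and device name are
guesses and vary per system):

    # make the HBA rescan its bus so the newly presented LUN shows up
    echo "- - -" > /sys/class/scsi_host/host0/scan
    dmesg | tail           # note the new disk, assumed /dev/sdd here
    mkfs.ext3 /dev/sdd
    mount /dev/sdd /data/newvol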

As for security, NFS has none. On a SAN, you'd have a much harder time
accessing data that's not yours.
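
To be fair, classic NFSv3 does have *some* access control, but it boils down
to trusting client hosts and the UIDs they present, which is why it looks
like "none" in practice. About all you get is something like this (the
hostname and subnet are hypothetical):

    echo '/export/shared trusted-host.example.com(rw,sync,root_squash) 192.168.1.0/24(ro)' >> /etc/exports
    exportfs -ra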
 



