HardwareBanter: a computer components & hardware forum

Netapp FC-SAN experiences anyone??



 
 
  #11
February 21st 05, 07:59 PM

This paper was co-authored by a NetApp employee. It is only slightly
more informative than a NetApp filer brochure. I'm sure I could find
plenty of EMC and Hitachi white papers that say exactly the opposite.
To me, a disk-intensive application makes more sense with 64k Fibre
Channel packets over a 2Gb or 4Gb storage network than with 1.5k
packets over a 1Gb network. If I were to recommend something to my
employer, I would want to base my decision on technical facts instead
of marketing propaganda.

Vic

  #13
February 22nd 05, 01:53 PM
Yura Pismerov



Faeandar wrote:

> Of course all this is a rant on FC v. ethernet but the original idea
> still stands. There's ZERO reason not to run Oracle (or most other
> db's) over NFS. And if you're looking to do clustered db environments
> (like Oracle RAC), NFS is the BEST option. Ask Oracle, after you get
> them past OCFS that is... it really does blow.
>
> ~F

Just to be fair: I had a negative experience running PostgreSQL off a
NetApp. Switching to DAS gave a significant performance improvement.
But then again, nobody has done anything in Postgres to optimize it
for NFS, unlike Oracle...



  #14
February 23rd 05, 07:57 AM
Faeandar

On 21 Feb 2005 11:59:51 -0800, Vic wrote:

> This paper was co-authored by a NetApp employee. This is only slightly
> more informative than a NetApp filer brochure. I'm sure I could find
> plenty of EMC and Hitachi white papers that would say exactly the
> opposite. To me a disk intensive application makes more sense with 64k
> fibre channel packets over a 2gb or 4gb storage network than with 1.5k
> packets over a 1gb network. If I were to recommend something to my
> employer I would want to base my decision on technical facts instead of
> marketing propaganda.
>
> Vic


It's not propaganda. Well, it is; I mean, everyone wants to make money,
so marketing plays a key role. But who was the other author on this
paper?

I can tell you that Oracle uses NetApp internally. A lot of it, too.
In fact, Oracle became NetApp's second-largest customer in the last
two years. That's not a coincidence. Part of the reason, I'm sure, is
to stick it to Veritas, but you can't do that unless you have a viable
alternative.

And the filer is not just viable but awesome. Performance for all but
the most I/O-intensive applications rocks, the snapshot functionality
of a filer is unsurpassed, and the DR or test-environment scenario
with Snap* is the easiest, bar none.

I've dealt with HDS in this arena and it is much more difficult to
make it all work. And performance was no better, in this case.

You mention packet size and pipe. Well, a lot of DB apps don't ask for
more than 1-4k per request, meaning all that extra payload capacity in
FC is wasted. So the pipe is not really an issue, but even supposing
it is, you can still trunk as mentioned previously.

~F
  #15
February 25th 05, 06:12 PM
StorProfi

Hi all!
We have been talking about Oracle. Perhaps you can help me with your
experience in another environment: a NetApp 270C with AIX / DB2 on the
SAN side, mixed with Windows on NAS.

Is it right that NetApp is very good at NAS but not so good in a pure
FC SAN environment?

What about the snapshots? Do they always take 20%? In all
environments?

What is your experience in a heterogeneous environment where the
NetApp shares access between NAS and SAN? If I understand correctly,
SAN block I/O is emulated on top of WAFL rather than native?

I need to use a NetApp in an AIX / DB2 environment and I'm not sure
this is the best solution.

Are there throughput figures anywhere for a NetApp in a SAN
environment compared with a DS4300, a CLARiiON, or other native FC
solutions?

Thanks a lot for all comments!

Alfredo.
  #16
February 25th 05, 07:07 PM
Yura Pismerov



StorProfi wrote:

> what about the Snapshots ? they are always 20% ? In all the
> environments ?

The size of the snapshots really depends on how often your data
change. The more changes between snapshots, the more space is taken.
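The copy-on-write effect described here can be sketched with some back-of-the-envelope arithmetic. This is a hypothetical illustration, not NetApp's actual space accounting: the snapshot itself is nearly free, and space is consumed only as live blocks diverge from the snapshotted ones.

```python
# Rough model of copy-on-write snapshot growth (illustrative only):
# a snapshot pins the blocks it references, and space is consumed as
# the active filesystem overwrites those blocks.

def snapshot_space_gb(daily_change_gb, days_retained):
    """Worst-case space held by one snapshot after N days of changes,
    assuming no block is overwritten twice."""
    return daily_change_gb * days_retained

# A 500 GB volume changing 2% (10 GB) per day, snapshot kept 7 days:
held = snapshot_space_gb(10, 7)
reserve = 0.20 * 500  # the oft-quoted 20% default reserve = 100 GB
print(held, reserve, held < reserve)
```

The 20% figure asked about above is just a default reserve; whether it is enough depends entirely on the change rate, as the model shows.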


> How looks your experience in a heterogenous environment when the
> NetApp shares access between NAS and SAN. If I good understand, is
> SAN Block I/O emulated on top of WAFL and not native ?

I believe WAFL is only for NAS. I might be wrong, but in mixed
environments the NetApp head unit is connected to the SAN and uses a
pool of disks from it to export disk space to NFS clients. So you can
still connect SAN disks directly without the NetApp, or you can use
NAS. Obviously, whatever disks are currently assigned to the NetApp
cannot be attached to SAN clients directly.



> I need to use a NetApp in an AIX / DB2 environment and I'm not sure if
> this is the best solution.
>
> There are anywehre troughput values for the NetApp in SAN environment
> compared with a DS4300, or Clarion, or.... ( native FC solutions ) ???

I'd say a better option would be an HDS solution that uses rebadged
NetApp heads to provide mixed SAN/NAS access. That way you get the
advantage of the best NAS solution, plus SAN-capable storage under the
same management "umbrella".



  #17
February 25th 05, 09:18 PM
Rob Turk

"Yura Pismerov" wrote in message:

> I believe WAFL is only for NAS.
> I might be wrong, but in mixed environments NetApp head unit is
> connected to SAN and uses a pool of disks from it to export the disk
> space to NFS clients. So you still can connect SAN disks directly
> without NetApp, or you can use NAS. Obviously whatever disks are
> currently assigned to NetApp can not be attached to the SAN clients
> directly.


As far as I know, WAFL is used throughout the filer and on all
attached disks. iSCSI LUNs and FC-SAN LUNs are just files on top of
the WAFL file system, which is what allows NetApp to do all the fancy
snapshot/cache/replication stuff with SAN.

Rob


  #19
February 28th 05, 04:49 PM
boatgeek

We have tons of NetApp storage (15 filers, 5 locations, 150 TBs) and
we use NFS connections for Oracle Financials, PeopleSoft, and
Documentum. It had better be supported by Oracle, because Oracle
themselves are the largest users of NetApp storage. NFS works
wonderfully: great performance and extremely flexible. The Oracle DBAs
launch an rsh script to put the database in hot backup mode, take a
snapshot, and then take it out of hot backup mode. They love it
because it takes 5 seconds. They've recovered Oracle databases many
times using this.
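The DBA workflow described above (hot backup mode, snapshot, end backup) can be sketched as follows. This is a hypothetical illustration: the filer name, volume name, and SQL script names are made up, and the command runner is injected so the sequence can be shown without a live filer or database. The `snap create <vol> <name>` filer command and Oracle's ALTER DATABASE BEGIN/END BACKUP are the real pieces the post alludes to.

```python
# Sketch of the "hot backup + snapshot" sequence (hypothetical names).
import subprocess
import time

def snapshot_backup(filer, volume, run=subprocess.check_call):
    """Quiesce Oracle, snapshot the volume over rsh, then resume."""
    snap_name = "hotbackup_%s" % time.strftime("%Y%m%d_%H%M%S")
    steps = [
        # Oracle keeps extra redo while datafiles are in backup mode.
        ["sqlplus", "-S", "/ as sysdba", "@begin_backup.sql"],
        # Point-in-time image of the volume holding the datafiles.
        ["rsh", filer, "snap", "create", volume, snap_name],
        ["sqlplus", "-S", "/ as sysdba", "@end_backup.sql"],
    ]
    for cmd in steps:
        run(cmd)
    return snap_name

# With a stub runner we can inspect the order of operations:
issued = []
snap = snapshot_backup("filer1", "oravol", run=issued.append)
for cmd in issued:
    print(" ".join(cmd))
```

The snapshot takes seconds regardless of database size, which is why the DBAs in the post love it: the window in hot backup mode stays tiny.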

Regarding the filers as SAN storage, we use them for SQL and Exchange.
Functionality is the key: because the snapshots are controlled by the
local clients, they are consistent and recoverable (versus a
traditional SAN snapshot). The clients can have up to 254 of them, so
they have a lot of flexibility in how often they take snapshots.

We also have equal amounts of storage on SANs. Mainly, SANs are very
good for a database that can never go down. NetApp isn't an N+1
architecture, and realistically you should tell your users to plan on
maybe two or three 15-minute outages per year for maintenance. Also,
if someone came to me with performance numbers that needed hundreds of
MBs per second of throughput, I would put them on a SAN. But
realistically, the only devices that can drive that are large UNIX
servers under high load. Few people really have that, though most
think they do until they actually look at the numbers.
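To put that maintenance-window estimate in availability terms, here is the simple arithmetic (my own calculation, not a vendor figure):

```python
# Three planned 15-minute outages per year, expressed as availability.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525600
downtime = 3 * 15                  # 45 minutes per year
availability = 1 - downtime / MINUTES_PER_YEAR
print(round(availability * 100, 4))  # prints 99.9914
```

So even with those planned outages you are still around "four nines"; the question is only whether your database can tolerate any scheduled window at all.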

Let me know if I can be of any further help.

Doug Vibbert

  #20
March 1st 05, 06:43 PM
boatgeek

One quick note: NEVER pick NTFS as the security style for a NetApp
volume on which you create a LUN. NEVER. Usually people have CIFS
shares off the NetApp filers, and eventually someone will want to
enable auditing on a particular directory for security purposes. If
you enable auditing, it will then "audit" the NTFS-style volume
containing the LUN, and it will absolutely kill performance. It has to
be UNIX permissions (even if the volume, such as one holding SQL data,
is for a Windows server). We did that once and everyone called up
asking why performance was such a dog. When we switched the permission
style to UNIX, it was 4 times the speed of the server's local disks.
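A minimal sketch of applying that fix with the 7-mode `qtree security` command, the one that sets the permission style described above. The filer and volume names are hypothetical, and the runner is injected so the command can be shown (and checked) without a live filer.

```python
# Sketch: force a volume/qtree to the unix security style over rsh.
import subprocess

def fix_security_style(filer, qtree_path, run=subprocess.check_call):
    """Set the security style to unix (ONTAP 7-mode syntax).

    Per the post: a LUN should never live on an ntfs-style volume,
    because enabling CIFS auditing there destroys performance.
    """
    run(["rsh", filer, "qtree", "security", qtree_path, "unix"])

issued = []
fix_security_style("filer1", "/vol/sqlvol", run=issued.append)
print(" ".join(issued[0]))
```

Existing files keep whatever permissions they have; the style controls how new permissions are evaluated, so plan the change before creating the LUN rather than after.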

 



