Can I upgrade memory on an F720?

  #1
Old April 6th 04, 11:33 AM
Thanatos

Hi Mark,

In NetApp's early days they used to allow customers to upgrade RAM - in fact
their SEs would sometimes recommend RAM or CPU upgrades on the early 486-based
(pre-Alpha and PIII) F2xx and F3xx series filers, based on workload
and capacity considerations.

From the F540 or F640 on (can't remember which), to reduce support costs
they adopted a policy of supporting only fixed configurations - customers who
did upgrade RAM found little or no improvement in performance anyway.
Interestingly, NetApp "capacity limits" tuned configurations. What they had
discovered was that there were optimal configurations, and that outside those
the benefit of partial upgrades was not worth the expense - i.e. lots of
customers would upgrade CPU and/or RAM and see no real gain. The capacity
limits have nothing to do with how much physical disk can actually be
connected; it's all about maintaining their performance reputation.

I.e. I can connect way more physical storage to an IBM xSeries e365 box
running Windows Server 2003 than to a NetApp F825. However, in a wide-sharing
I/O environment (e.g. home dirs for a few thousand users), performance on the
Windows box will degrade much faster due to NTFS's lousy random I/O
performance. This is without even considering filesystem reliability and
failure modes - in which case you can forget about Windows ;o)

Note - you will need ECC registered SDRAM DIMMs.

Please let the group know if you do gain a noticeable improvement - but
don't be surprised if you don't.

What is the bottleneck you are experiencing, and under what load and
applications? Is it streamed or transactional I/O (e.g. flat files vs. Oracle
databases - OLTP or data warehouse etc.)?

"Mark" wrote in message
et...
"Mark Smith" wrote in message
...

Currently have a F720 with 256MB RAM.

This is a noticable bottle neck on our system.

Netapp will say its not upgradeable, we have to buy a new system!

Has anyone managed to put more memory in a F700 series box?

Currently has 4 x 64MB DIMMs... I'm thinking of installing 4 128MB

DIMMs

If you call NetApp they'll so no ... it comes as it comes .. you want

more
memory you'll be told to buy a new head .. good eh ? :-)



Hmmm.. just as I thought!, but I've decided to just give it a go!
I've ordered some128MB DIMMS and will plonk them in a see what happens!

Mark




  #2
Old April 6th 04, 04:17 PM
Mark


"Thanatos" wrote in message
...
Hi Mark,

In NetApp's early days they used to allow customers to upgrade RAM - in

fact
their SE's would sometimes recommend RAM or CPU upgrades on the early 486
based (pre Alpha and PIII) F2xx and F3xx series filers, based on workload
and capacity considerations.

From the F540 or F640 on (can't remember which), to reduce support costs
they adopted a policy of supporting RAM only to find that there was little
or no improvement in performance. Interesting, NetApp "capacity limits"
tuned configurations. What they had discovered was that there were optimal
configurations and that outside those the benefit of partial upgrades was
not worth the expense. I.e. lots of customers would upgrade CPU and/or

have
nothing at all to do with how much physical disk can be connecting, it's

all
about maintaining their performance reputation.

I.e. I can connect way more physical storage to an IBM xSeries e365 box
running Windows Server 2003 than to a NetApp G825. However, in a wide
sharing I/O environment (i.e. home dirs for a few thousand users),
performance on the Windows box will degrade much faster due to NTFS's

lousy
random I/O performance. This is without even considering filesystem
reliability and failure modes - in which case you can forget about Windows
;o)

Note - you will need ECC registered SDRAM DIMM's.

Please let the group know if you do gain a noticeable improvement -

however
don't be surprised if you don't.

What is the bottleneck you are experiencing, and what load and

applications?
Is it streamed or transactional i/o (i.e. flat files, Oracle databases -
OLTP or data warehouse etc)?



Hi, thanks for the historical info.

The bottleneck, as confirmed by NetApp engineers, is that we have far too many
files/directories in each directory - i.e. we have too many very large
directories.

NetApp's explanation was that the memory was running out of room to hold the
directory cache, so the filer had to re-read the directory from disk for
almost every file read.

Symptoms show it is reading 4 times more data off disk than it is outputting
to the network.
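
For anyone wanting to sanity-check that figure: read amplification is just
disk bytes read divided by bytes sent to clients over the same interval. A
minimal Python sketch - the counter names and sample values below are made up
for illustration, substitute whatever your filer's stats actually report:

# Illustrative only - counter names and sample values are hypothetical.
disk_read_bytes = 48 * 1024**3   # e.g. 48 GiB read from disk over the interval
net_out_bytes   = 12 * 1024**3   # e.g. 12 GiB delivered to the network

amplification = disk_read_bytes / net_out_bytes
print(f"read amplification: {amplification:.1f}x")   # 4.0x, matching what we see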

Data is a mixture of Lotus Notes databases (8,000 DBs per directory, each of
which also has a full-text index directory!!!) and general small flat files
(1k - 100k).

Basically, we went to NetApp and said the performance of the filers was crap!
They came and checked it out and said nothing. They would not agree nor
disagree.
They were more interested in doing us out of another 100 grand.

Mark.




  #3
Old April 26th 04, 01:39 AM
Thanatos

Hi Mark,

Good info - this does make sense.

NetApp use an unusual method for dealing with large-directory performance -
they maintain a full filesystem directory name cache in RAM, more accurately
called a hash table, since it is a complete map of the directory structure. As
long as there is sufficient RAM for this, the performance gains compared to a
more traditional name cache can be significant. The problem you have hit is
that since you have run out of RAM, the re-reads bring performance back down
to what you would expect from a more traditional UFS-style filesystem.
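
To make the failure mode concrete, here is a toy sketch in Python (purely
illustrative - the real WAFL structures are far more sophisticated, and all
names here are invented). A bounded hash-map name cache answers lookups
instantly while the directory map fits in "RAM"; once the working set exceeds
the cache, LRU eviction thrashes and every lookup degrades into a simulated
directory re-read - exactly the cliff you have fallen off:

from collections import OrderedDict

class DirNameCache:
    """Toy LRU-bounded directory name cache (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity    # cache slots standing in for filer RAM
        self.cache = OrderedDict()  # file name -> inode number
        self.rereads = 0            # simulated directory re-reads from disk

    def lookup(self, directory, name):
        if name in self.cache:
            self.cache.move_to_end(name)    # LRU hit: mark as recently used
            return self.cache[name]
        self.rereads += 1                   # miss: must re-read the directory
        inode = directory[name]             # stands in for the on-disk scan
        self.cache[name] = inode
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used
        return inode

directory = {"db%04d.nsf" % i: i for i in range(8000)}  # 8,000 files, as in your case

small = DirNameCache(capacity=4000)   # cache smaller than the directory
for name in list(directory) * 2:      # two full passes over every file
    small.lookup(directory, name)
print(small.rereads)                  # 16000 - every single lookup misses

big = DirNameCache(capacity=8000)     # cache holds the whole directory
for name in list(directory) * 2:
    big.lookup(directory, name)
print(big.rereads)                    # 8000 - misses on the cold pass only

Note how doubling the cache takes you from a 100% miss rate to misses on the
first pass only - which is why extra RAM should help in your case.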

They have an excellent white paper on this here:

http://www.netapp.com/tech_library/3006.html

Note that the benchmarking in this paper is based on very old platforms for
both the UFS and WAFL testing, but if you scale them forward to current
technologies the comparative results should be similar between comparable
hardware configurations.

In your particular circumstance extra RAM should help as you have surmised.

Bear in mind that if you were to move your filesystems in their current
structure to a Unix box of similar hardware spec to your filer, you would
expect much worse performance than you are seeing now, as described in the
paper above.

NTFS uses an interesting btree-style search algorithm for large directory
lookups, which should also yield better performance than a linear search or
name cache method. However, in practice other issues with NTFS layout (MFT
lookup seeks etc.) mean that the general performance for complex directory
structures is so bad that the btree search is necessary just to bring its
performance back to what would be considered acceptable for most Unix
filesystems (UFS, JFS, AdvFS, XFS etc.)
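
As a rough illustration of what the btree buys (another toy Python sketch,
nothing like NTFS's actual on-disk format): binary search over a sorted index
touches about log2(n) entries per lookup - roughly 13 probes for 8,000 names -
where a linear directory scan touches half of them on average:

import bisect

# 8,000 zero-padded names so lexicographic order matches numeric order.
names = sorted("file%05d.txt" % i for i in range(8000))

def linear_lookup(names, target):
    """Linear directory scan: O(n) comparisons on average."""
    for i, name in enumerate(names):
        if name == target:
            return i
    return -1

def btree_style_lookup(names, target):
    """Binary search over a sorted index: O(log n) probes."""
    i = bisect.bisect_left(names, target)
    if i < len(names) and names[i] == target:
        return i
    return -1

# Worst case for the linear scan, ~13 probes for the sorted lookup.
target = "file07999.txt"
assert linear_lookup(names, target) == btree_style_lookup(names, target) == 7999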

WAFL is absolutely the right filesystem technology for the applications you
describe - you're simply running an underpowered box, as they have suggested.
I used to work for a systems integrator that was a NetApp reseller (the
company I work for now does much more with HDS), and last year most of our
clients who were running 700 series filers upgraded to 800 series filers
(810s and 825s) simply because they were hitting the upward curve on cost
of support. This situation is not unique to NetApp - IBM, HDS, EMC,
StorageTek etc. all do the same thing, as maintaining support for older
hardware becomes cost-prohibitive for the vendor. Most enterprise-class
storage solutions are aged out after ~3-4 years due to obsolescence - i.e.
there aren't many five year old Symmetrix boxes out there running in live
environments. The fact that a five year old F720 can still keep up with most
new server platforms in most situations is nothing short of amazing, but
really you should have been looking at an upgrade before now anyway.

I'm surprised at the cost you mentioned - most of my clients who upgraded
last year did it for well under AUD$80K (or less than US$50K) to go to an
810 or 825.



"Mark" wrote in message
et...

"Thanatos" wrote in message
...
Hi Mark,

In NetApp's early days they used to allow customers to upgrade RAM - in

fact
their SE's would sometimes recommend RAM or CPU upgrades on the early

486
based (pre Alpha and PIII) F2xx and F3xx series filers, based on

workload
and capacity considerations.

From the F540 or F640 on (can't remember which), to reduce support costs
they adopted a policy of supporting RAM only to find that there was

little
or no improvement in performance. Interesting, NetApp "capacity limits"
tuned configurations. What they had discovered was that there were

optimal
configurations and that outside those the benefit of partial upgrades

was
not worth the expense. I.e. lots of customers would upgrade CPU and/or

have
nothing at all to do with how much physical disk can be connecting, it's

all
about maintaining their performance reputation.

I.e. I can connect way more physical storage to an IBM xSeries e365 box
running Windows Server 2003 than to a NetApp G825. However, in a wide
sharing I/O environment (i.e. home dirs for a few thousand users),
performance on the Windows box will degrade much faster due to NTFS's

lousy
random I/O performance. This is without even considering filesystem
reliability and failure modes - in which case you can forget about

Windows
;o)

Note - you will need ECC registered SDRAM DIMM's.

Please let the group know if you do gain a noticeable improvement -

however
don't be surprised if you don't.

What is the bottleneck you are experiencing, and what load and

applications?
Is it streamed or transactional i/o (i.e. flat files, Oracle databases -
OLTP or data warehouse etc)?



Hi, Thanks for the historical info,
The bottleneck as confirmed by NetApp Engineers is that we have far too

many
files/directories in each directory.
IE: we have too many very large directories.
Netapps explanation was that the memory was running out of room to put the
directory cache and therefore had to re-read the directory for almost

every
file read.

Symptoms show it is reading 4 times more data off disk than it is

outputing
to the network.

Data is a mixture of Lotus Notes Databases ( 8,000 dbs per directory ) (
each also has a full text index.directory!!!)
and general small flat files ( 1k - 100k )

Basically , we went to Netapp and said the performance of the filers were
crap!
They came and checked it out and said nothing. They would not agree nor
disagree.
They were more interested in doing us out of another 100 grand.

Mark.






 



