Designing A File Server With Best Price Performance?



 
 
  #11  
Old September 1st 06, 04:11 AM posted to comp.arch.storage
Bill Todd

Will wrote:
"Moojit" wrote in message
...
I think you may be focusing on the wrong areas. DISK performance and NIC
performance should be the priority, less on 64-bit. If you get a NIC that
has its own TOE, 64-bit and processor become less important. For the
DISKS, a RAID controller that supports WRITE BACK cache in addition to READ
cache will be optimal.


64 bit is required to get support for more than 4 GB of memory.


Actually, it's not: Intel's IA32 processors (at least in their server
lines) have supported up to 64 GB of RAM since 1996.

I don't think that standard 32-bit Windows file systems can use the
additional RAM for caching, however: you might check to see whether
some file system on Linux or *BSD does if you're interested.

- bill
  #12  
Old September 1st 06, 04:39 AM posted to comp.arch.storage
Will

"Bill Todd" wrote in message
news:dvmdnUQAqMVpOWrZnZ2dnUVZ_rGdnZ2d@metrocastcablevision.com...
64 bit is required to get support for more than 4 GB of memory.


Actually, it's not: Intel's IA32 processors (at least in their server
lines) have supported up to 64 GB of RAM since 1996.


Unfortunately, since it is a Windows-based file server, what would matter
would be the additional limitations of the Microsoft OS, not the hardware.
For the 32-bit Windows Server Standard Edition, the maximum memory
utilization appears to be 4 GB. The 64-bit Standard Edition jumps that to
32 GB. I need about 12 GB, so it's 64-bit or nothing.


I don't think that standard 32-bit Windows file systems can use the
additional RAM for caching, however: you might check to see whether
some file system on Linux or *BSD does if you're interested.



I was planning to use Veritas Storage Foundation, which has a built-in
feature, Vcache, that lets you specify the exact amount of cache for each
logical volume. I guess I should not take for granted that it won't max
out before 12 GB. If I can't cache the file system, then the whole
approach I'm taking is pointless.

--
Will


  #13  
Old September 1st 06, 06:38 AM posted to comp.arch.storage
Curtis Preston

As long as you're sticking with Windows as your OS, I would definitely agree
with the previous comment about investing in a TCP Offload Engine (TOE)
card. There are a few of them out there, but the one that's been around the
longest is Alacritech.

I'm not sure I agree with your comment that you can't get CPU bound with
file sharing, since I've seen plenty of really high-end file servers that
cost hundreds of thousands of dollars get CPU-bound. But if I add "for
twenty users" to the end of your sentence, then I'd agree with it. I'm
guessing that's what you meant. Just wanted to clarify the point for anyone
else who might be reading along.

On 8/31/06, Will wrote:

"Bill Todd" wrote in message
news:dvmdnUQAqMVpOWrZnZ2dnUVZ_rGdnZ2d@metrocastcab levision.com...
64 bit is required to get support for more than 4 GB of memory.


Actually, it's not: Intel's IA32 processors (at least in their server
lines) have supported up to 64 GB of RAM since 1996.


Unfortunately, since it is a Windows based file server, what would matter
would be the additional limitations of the Microsoft OS and not the
hardware. For 32-bit Windows Standard Edition server, the maximum memory
utilization appears to be 4 GB. The 64-bit Windows Standard Edition
jumps
that to 32 GB. I need about 12 GB, therefore it's 64 bit or nothing.


I don't think that standard 32-bit Windows file systems can use the
additional RAM for caching, however: you might check to see whether
some file system on Linux or *BSD does if you're interested.



I was planning to use Veritas Storage Foundation which has a feature
Vcache
built in that lets you specify exact amounts of cache for each logical
volume. I guess I should not take for granted that it will not max out
before 12 GB. If I can't cache the file system, then the whole approach
I'm taking is pointless.

--
Will




  #14  
Old September 1st 06, 07:52 AM posted to comp.arch.storage
Will

"Curtis Preston" wrote in message
news:mailman.0.1157089107.29532.test2_wcurtisprest...
As long as you're sticking with Windows as your OS, I would definitely agree
with the previous comment about investing in a TCP Offload Engine (TOE)
card. There are a few of them out there, but the one that's been around the
longest is Alacritech.


Does this work as the ethernet card, or does it supplement the ethernet
card already in the system? All I can remember about TCP offloading is
that when I enabled that feature on the old Compaq NC6134 fiber optic
ethernet cards, every TCP packet was showing checksum errors in a sniffer.
So much for automation. I ended up turning off TCP offloading, and it
seemed to work without errors after that.


I'm not sure I agree with your comment that you can't get CPU bound with
file sharing, since I've seen plenty of really high-end file servers that
cost hundreds of thousands of dollars get CPU-bound. But if I add "for
twenty users" to the end of your sentence, then I'd agree with it. I'm
guessing that's what you meant. Just wanted to clarify the point for anyone
else who might be reading along.


Just curious: were those CPU-bound servers Solaris boxes that were spawning
a process for each connected user? If the I/O is asynchronous and you
limit the number of threads and processes being context-switched, you
should bottleneck on I/O (file, network, or both) before you hit any CPU
limit. If you give the CPU hundreds of spawned processes, then of course
you are going to overwhelm it at some point just in context switches.
Fortunately, Windows server uses asynchronous file I/O.
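
For anyone following along, here is a minimal sketch (plain C against the
Win32 API) of the overlapped I/O pattern I mean; the file name and buffer
size are made up purely for illustration:

/* Minimal sketch of Win32 overlapped (asynchronous) file I/O: the thread
 * issues the read and keeps working, and only waits when it actually
 * needs the data. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* FILE_FLAG_OVERLAPPED asks the kernel to queue the read and return
     * immediately instead of blocking the calling thread. */
    HANDLE h = CreateFileA("C:\\shares\\bigfile.dat", GENERIC_READ,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING,
                           FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    static char buf[64 * 1024];
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

    if (!ReadFile(h, buf, sizeof buf, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING) {
        CloseHandle(ov.hEvent);
        CloseHandle(h);
        return 1;
    }

    /* ... the same thread could be servicing other clients here ... */

    DWORD bytes = 0;
    GetOverlappedResult(h, &ov, &bytes, TRUE);   /* TRUE = wait for it now */
    printf("read %lu bytes\n", (unsigned long)bytes);

    CloseHandle(ov.hEvent);
    CloseHandle(h);
    return 0;
}

A real file server would hang many of these off an I/O completion port
rather than waiting on each one, but the point is the same: one thread,
many outstanding I/Os, no per-user process.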

--
Will


  #15  
Old September 1st 06, 02:06 PM posted to comp.arch.storage
Hans Jørgen Jakobsen

On Thu, 31 Aug 2006 23:52:28 -0700, Will wrote:
"Curtis Preston" wrote in message
news:mailman.0.1157089107.29532.test2_wcurtisprest...
As long as you're sticking with Windows as your OS, I would definitely agree
with the previous comment about investing in a TCP Offload Engine (TOE)
card. There are a few of them out there, but the one that's been around the
longest is Alacritech.


Does this work as the ethernet card, or does it supplement the ethernet
card already in the system? All I can remember about TCP offloading is
that when I enabled that feature on the old Compaq NC6134 fiber optic
ethernet cards, every TCP packet was showing checksum errors in a sniffer.
So much for automation. I ended up turning off TCP offloading, and it
seemed to work without errors after that.


Was the sniffing done on the sending machine?
On a machine with TOE, you will see errors on outgoing packets because the
checksum is only calculated when the packet hits the card, and the sniffer
(tcpdump) takes its copy before that.
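
In case it helps: the value the card fills in is just the standard 16-bit
ones'-complement Internet checksum (RFC 1071) over the segment plus a
pseudo-header. A rough C sketch of that sum, for illustration only:

/* The Internet checksum (RFC 1071) that a checksum-offload/TOE NIC
 * computes in hardware.  A sniffer on the sending host copies the frame
 * before the card has filled this field in, so the field still holds
 * zero and the capture flags it as a bad checksum. */
#include <stdint.h>
#include <stddef.h>

uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {                     /* add up 16-bit words */
        sum += (uint32_t)((data[0] << 8) | data[1]);
        data += 2;
        len  -= 2;
    }
    if (len)                              /* odd trailing byte */
        sum += (uint32_t)(data[0] << 8);

    while (sum >> 16)                     /* fold the carries back in */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;                /* ones' complement of the sum */
}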
/hjj
  #16  
Old September 1st 06, 09:49 PM posted to comp.arch.storage
Will

"Hans Jørgen Jakobsen" wrote in message
...
Was the sniffing done on the sending machine?
On a machine with TOE, you will see errors on outgoing packets because the
checksum is only calculated when the packet hits the card, and the sniffer
(tcpdump) takes its copy before that.


The errors were on receiving, but regarding sending: why would the sniffer
see a different packet than the card does? The sniffer is on the machine
that has the TOE card, and it uses the TOE card to see the packet. No
doubt I don't understand the concepts here well enough.

--
Will


  #17  
Old September 2nd 06, 03:59 AM posted to comp.arch.storage
Moojit


"Will" wrote in message
...
"Moojit" wrote in message
...
I think you may be focusing on the wrong areas. DISK performance and NIC
performance should be the priority, less on 64-bit. If you get a NIC that
has its own TOE, 64-bit and processor become less important. For the
DISKS, a RAID controller that supports WRITE BACK cache in addition to READ
cache will be optimal.


64 bit is required to get support for more than 4 GB of memory. It's the
extra memory I want, to cache the most commonly used data for fast reads,
not the extra processing capabilities of 64 bit CPUs.


Actually, this is not true. A couple of things to consider ...

1. You can enable extended memory addressing on an IA32 processor by using
the /PAE switch in the boot.ini file (the processor switches to a 36-bit
physical address bus instead of a 32-bit one). This will get you over the
4 GB boundary.
2. The Windows kernel gives each user-mode process at most 2 GB of virtual
address space (most of this virtualized memory resides in the paging file).
3. Applications can be customized to go beyond this 2 GB boundary using
special Win32 API calls (Address Windowing Extensions); see the sketch
below. Enterprise-class database applications such as Oracle and SQL
Server take advantage of this, but in general most applications do not.
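
For point 3, a rough C sketch of what those AWE calls look like. The size
is made up, error handling is minimal, and the account running it needs
the "Lock pages in memory" privilege:

/* Sketch of Address Windowing Extensions (AWE): allocate physical pages
 * and view them through a window of virtual address space, which is how
 * a 32-bit process can touch memory beyond its normal 2 GB. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Ask for 64 MB of physical pages (illustrative; the same calls
     * scale to the multi-gigabyte case). */
    ULONG_PTR pages = (64u * 1024 * 1024) / si.dwPageSize;
    ULONG_PTR *pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                             pages * sizeof(ULONG_PTR));
    if (!pfns)
        return 1;

    /* 1. Grab physical pages; these never go to the paging file. */
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pages, pfns))
        return 1;   /* usually means the privilege is missing */

    /* 2. Reserve a window of virtual address space to view them through. */
    void *window = VirtualAlloc(NULL, pages * si.dwPageSize,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
    if (!window)
        return 1;

    /* 3. Map the physical pages into the window.  A process can remap
     * different pages into the same window over and over, which is how
     * databases reach far more memory than their virtual address space. */
    if (!MapUserPhysicalPages(window, pages, pfns))
        return 1;

    ((char *)window)[0] = 42;   /* the memory is now directly usable */
    printf("mapped %lu pages through the AWE window\n",
           (unsigned long)pages);

    MapUserPhysicalPages(window, pages, NULL);            /* unmap */
    FreeUserPhysicalPages(GetCurrentProcess(), &pages, pfns);
    HeapFree(GetProcessHeap(), 0, pfns);
    return 0;
}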

Da Moojit



The application does not involve heavy write activity, so I don't think we
would benefit much from a battery-backed write cache.

--
Will


"Will" wrote in message
...
We are migrating shared files off of a Windows domain controller to a
discrete file server, and I'm thinking through what would be the best
design for that new server. Given that 64 bit operating systems are now
here, I'm thinking we do not need to spend much on fast drives, but we
should instead invest in the lowest cost 64 bit server (a file server
won't bottleneck on CPU so even a 1 GHz machine would be fast enough) and
install gigabit ethernet and lots of memory. I'm sure that the most
requested files would fit into a memory cache that is under 10 GB in size.
A computer with 12 GB of memory, 64 bit Windows 2003 server web edition,
and gigabit ethernet should provide the best possible speed for the case
where there is not much write activity, but lots of read activity on less
than 10 GB of data.

Assume less than 20 users, heavy read activity, very low write activity.
Disk with all file shares would be under 100 GB, but under 10 GB of that
represents 95% of the activity.

Is my design correct if I want to maximize performance for this small
network?

--
Will


  #18  
Old September 2nd 06, 06:47 AM posted to comp.arch.storage
Will

"Moojit" wrote in message
...
64 bit is required to get support for more than 4 GB of memory. It's the
extra memory I want, to cache the most commonly used data for fast reads,
not the extra processing capabilities of 64 bit CPUs.


Actually, this is not true. A couple of things to consider ...

1. You can enable extended memory addressing on an IA32 processor by using
the /PAE switch in the boot.ini file (the processor switches to a 36-bit
physical address bus instead of a 32-bit one). This will get you over the
4 GB boundary.
2. The Windows kernel gives each user-mode process at most 2 GB of virtual
address space (most of this virtualized memory resides in the paging file).
3. Applications can be customized to go beyond this 2 GB boundary using
special Win32 API calls (Address Windowing Extensions). Enterprise-class
database applications such as Oracle and SQL Server take advantage of
this, but in general most applications do not.


So let's distinguish the empirical from the theoretical.

Empirically, if I want to budget for Windows Server 2003 Standard Edition,
and my application is a file server using the built-in Windows protocols,
then I need a 64-bit edition in order to get more than 4 GB.

Empirically, Microsoft did not build the 32-bit version of Standard Edition
to use more than 4 GB for file sharing. They could have. They did not.

--
Will


 



