#1
Designing A File Server With Best Price Performance?
We are migrating shared files off of a Windows domain controller to a discrete file server, and I'm thinking through what would be the best design for that new server.

Given that 64-bit operating systems are now here, I'm thinking we don't need to spend much on fast drives; instead we should invest in the lowest-cost 64-bit server (a file server won't bottleneck on CPU, so even a 1 GHz machine would be fast enough) and install gigabit Ethernet and lots of memory. I'm confident the most-requested files would fit in a memory cache under 10 GB in size. A machine with 12 GB of memory, 64-bit Windows Server 2003 Web Edition, and gigabit Ethernet should provide the best possible speed for a workload with little write activity but heavy read activity on less than 10 GB of data.

Assume fewer than 20 users, heavy read activity, very low write activity. The disk holding all file shares would be under 100 GB, but under 10 GB of that accounts for 95% of the activity. Is my design correct if I want to maximize performance for this small network?

-- Will
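Will's sizing argument can be put in back-of-envelope form. A minimal sketch, with illustrative latency figures that are assumptions for the sake of the arithmetic (not measurements of any particular hardware):

```python
def effective_read_latency_ms(hit_rate, ram_ms=0.001, disk_ms=8.0):
    """Weighted-average read latency for a given cache hit rate.
    The RAM and disk latency figures are illustrative assumptions."""
    return hit_rate * ram_ms + (1.0 - hit_rate) * disk_ms

# If the hot 10 GB fits in cache and serves 95% of reads, average
# latency is dominated by the small fraction of misses that hit disk.
print(effective_read_latency_ms(0.95))
```

The point of the sketch: once the hit rate is high, average latency is set almost entirely by the miss path, which is why the thread's later advice to size the disks for the uncached load still matters.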
#2
Will wrote:
> We are migrating shared files off of a Windows domain controller to a discrete file server ... Is my design correct if I want to maximize performance for this small network?

If you've correctly characterized your workload, it's not an unreasonable plan, except that, first, I don't think MS has shipped a 64-bit WS2003 Web Server Edition yet, and second, the Web Server Edition is *not* licensed for local file serving. 64-bit WS2003 Standard Edition is probably what you're looking for.

Nonetheless, reasonably fast drives will still help at initial loads and when hitting files that don't cache. But I'd be more concerned about drive reliability than performance: a mirrored pair of server-class SATA drives of the requisite capacity (but not the fastest available) is a much better idea than cheaping out and putting a couple of desktop drives in there. Also, on a server, and *especially* with that much RAM, I'd absolutely insist on ECC memory.
#3
Will wrote:
> We are migrating shared files off of a Windows domain controller to a discrete file server ... Is my design correct if I want to maximize performance for this small network?

10 GB of RAM for cache sounds like overkill. File servers of the size you're talking about are generally well underutilized; I've seen VMware servers with 4 GB of RAM running 10 Windows file servers, each serving more users than you're looking at. If money is no object, by all means go for it. If you're sizing the server to its workload, you could probably save several thousand dollars by going with a lower-end box and less RAM.

Also, you don't mention what type of data you'll be working with, but file servers are typically very tolerant of disk latency: if an end user opening a file sees a 50 ms delay instead of a 4 ms delay, do they really notice the difference? If you're working with large sound or video files and that's a concern, you could go with 15K RPM SCSI disks. If not, you'll probably be fine with lower-cost, higher-capacity SATA disks.
#4
"Jon Metzger" wrote in message ...
> 10GB of RAM sounds like overkill. ... you could probably save yourself several thousand dollars by going with a lower end box with less RAM.

Point well taken. What tools are available in a Windows environment to help us profile how much of the file system is actually being read over the course of a week?

-- Will
#5
wrote in message ...
> If you've correctly characterized your workload, it's not an unreasonable plan, except that, first, I don't think MS has shipped a 64-bit WS2003 Web Server Edition yet, and second, the Web Server Edition is *not* licensed for local file serving.

It looks like you are correct. What a pity; not very good marketing on Microsoft's part. I would probably try to get more life out of a Windows 2000 box than force an expensive software upgrade.

-- Will
#6
Will wrote:
> Point well taken. What tools are available for a Windows environment to help us profile what amount of the file system is actually being read over the course of a week?

Check out Perfmon; it's built into Windows. It may be a little tricky to set up, but you should be able to find all the information you need there.
#7
"Jon Metzger" wrote in message ...
> Check out Perfmon; it's built into Windows. It may be a little tricky to set up, but you should be able to find all the information you need there.

I don't think Perfmon would track anything more than low-level performance characteristics like the number of reads and writes on a logical or physical volume. How is that going to help me determine that 95% of the reads are on xx GB of the file system?

-- Will
#8
Will wrote:
> I don't think Perfmon would track anything more than low-level performance characteristics like the number of reads and writes on a logical or physical volume. How is that going to help me determine that 95% of the reads are on xx GB of the file system?

Sorry, I misread your original question; Perfmon probably won't get you that level of detail. I'm guessing you want this statistic in order to size your RAM so that all of the "most read" data stays in the filesystem cache. I think you may be over-engineering the solution: I've seen people insist on 15K RPM SCSI disks for file servers that have since been migrated to ATA without anyone noticing the difference.

That said, having hard data to plan with is always better than guessing. I'd watch Perfmon for reads and writes per second and KB transferred, then purchase disks that can handle that load plus some headroom for growth. The filesystem cache is all well and good, but it's a good idea to plan for the disks to handle the load without it; any performance gain from the cache is then a bonus. I think that's the safest approach to your problem, and it should be less expensive than your original plan to boot.
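One rough way to approximate the size of the "hot" data set without special tooling is to scan file last-access times. This is only a sketch under assumptions the thread doesn't confirm: it assumes last-access updates are enabled on the volume, and atime is a coarse proxy for read volume (it says a file was touched, not how often):

```python
import os
import time

def hot_set_bytes(root, window_days=7):
    """Walk a share and estimate how many bytes were accessed within
    the window, using last-access times as a rough 'hot data' proxy."""
    cutoff = time.time() - window_days * 86400
    hot = total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or access denied; skip it
            total += st.st_size
            if st.st_atime >= cutoff:
                hot += st.st_size
    return hot, total

if __name__ == "__main__":
    hot, total = hot_set_bytes(".")
    print(f"hot: {hot} of {total} bytes ({hot / max(total, 1):.0%})")
```

Run it against the share root at the end of a typical week; the hot/total ratio gives a first-order answer to the "95% of reads on xx GB" question, which Perfmon's volume-level counters can't.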
#9
I think you may be focusing on the wrong areas. Disk performance and NIC performance should be the priority, less so 64-bit. If you get a NIC with its own TOE (TCP offload engine), 64-bit and the processor become less important. For the disks, a RAID controller that supports write-back cache in addition to read cache will be optimal.

Da Moojit

"Will" wrote in message ...
> We are migrating shared files off of a Windows domain controller to a discrete file server ... Is my design correct if I want to maximize performance for this small network?
#10
"Moojit" wrote in message ...
> I think you may be focusing on the wrong areas. Disk performance and NIC performance should be the priority, less so 64-bit. ... For the disks, a RAID controller that supports write-back cache in addition to read cache will be optimal.

64-bit is required to get support for more than 4 GB of memory. It's the extra memory I want, to cache the most commonly used data for fast reads, not the extra processing capability of 64-bit CPUs. The application does not involve heavy write activity, so I don't think we would benefit much from a battery-backed write cache.

-- Will