#1
MSA 1500 Performance Question
Hi everyone,
I recently migrated to an MSA1500-based SAN. I have the following config:

- MSA1500 with redundant controllers, but I did not purchase the extra 128 MB cache upgrade (DOH)
- 2x MSA30s loaded with 72 GB 15k U320 drives (all LUNs are RAID 0+1)
- 1 MSA20 with 7.2k 160 GB SATA drives (one LUN, RAID 5)
- 1 four-port Brocade 2 Gb switch
- 1 server running Windows Server 2003 Enterprise (for now)

Now, everything works, but the performance seems really subpar. When I do a backup from the MSA30 to the MSA20, I get about 7 MB/s. That's truly awful; I'd be better off backing up to USB at that rate. So I ran some benchmarks with Iometer, and it claims to be able to do around 100 MB/s from both the MSA30 and the MSA20. However, when I do drag-and-drop copies in Explorer, I get the following results:

Cache set to 20% Read / 80% Write
---------------------------------
MSA30 to Local C:    9.8  MB/s
Local C: to MSA30   33    MB/s
MSA20 to Local C:   19.25 MB/s
Local C: to MSA20   12.25 MB/s
MSA30 to MSA20       8.16 MB/s
MSA20 to MSA30      18.58 MB/s

Cache set to 80% Read / 20% Write
---------------------------------
MSA30 to Local C:   29.94 MB/s
Local C: to MSA30   19.96 MB/s
MSA20 to Local C:   59.8  MB/s
Local C: to MSA20    8.29 MB/s
MSA30 to MSA20       6.73 MB/s
MSA20 to MSA30      19.96 MB/s

So it looks like the cache settings REALLY matter, and I am going to put in the cache upgrade based on this. Still, the fact that Iometer reports such different results from plain drag-and-drop copies makes me wonder whether there is something in Windows that I could tweak. Any thoughts would be appreciated.
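For what it's worth, drag-and-drop numbers like the ones above can be cross-checked outside Explorer with a simple timed copy. The sketch below is an illustrative Python script (the paths in the comment are made up, and `timed_copy_mbps` is just a name I picked), not anything that ships with the MSA:

```python
import os
import shutil
import time


def timed_copy_mbps(src, dst):
    """Copy one file and report throughput in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    # Guard against a zero-length timing window on tiny files.
    return size / (1024 * 1024) / max(elapsed, 1e-9)


# Example with hypothetical paths:
# print(timed_copy_mbps(r"E:\data\big.vhd", r"F:\backup\big.vhd"))
```

Running the same copy twice will usually report a higher figure the second time, because the source is then in the Windows file cache; that is one reason copy benchmarks are so noisy.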
#2
Apeulus Rex wrote:
> I recently migrated to an MSA1500 based SAN. [snip] So I ran some benchmarks with Iometer, and it claims to be able to do like 100MB/s from both the [snip]

What load parameters are you using in Iometer? I can probably achieve about 100 MB/s sustained read on a Windows box connected to pretty much any RAID system with a few drives, if I configure the load for huge sequential reads. However, that doesn't mean I can get 100 MB/s from that same Windows box and RAID system for any arbitrary application.
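To illustrate the point about load parameters: the same disk can report wildly different throughput depending on whether you issue large sequential reads or small random ones. A minimal Python sketch (illustrative only, not Iometer; the OS file cache will inflate the numbers unless the file is much larger than RAM):

```python
import os
import random
import time


def read_mbps(path, block_size=64 * 1024, blocks=256, sequential=True):
    """Read `blocks` blocks of `block_size` bytes from `path`, either
    front-to-back or at random offsets, and return throughput in MB/s."""
    size = os.path.getsize(path)
    if sequential:
        offsets = [i * block_size for i in range(blocks)]
    else:
        offsets = [random.randrange(0, max(size - block_size, 1))
                   for _ in range(blocks)]
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(block_size))
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / max(elapsed, 1e-9)
```

On rotating disks the gap between the two modes is typically an order of magnitude, which is why a benchmark profile configured for huge sequential reads says very little about backup or database workloads.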
#3
Hi Charles,
That's true; I'm not doing any random I/O in the test. Wouldn't backing up the filesystem be sequential, though? That's the thing I'm really concerned about.

Sean wrote in message ups.com... [snip]
#4
Sean Howard wrote:
> That's true, I am not doing any random in the test. Wouldn't backing up the filesystem be sequential though? That's the thing I'm really concerned about.

Yes, but on an average Windows box with user files, the files are fairly small and numerous. I usually estimate I can get 20-25 MB/s at best from a filesystem with small (10-100 kB) files. For profile directories this drops substantially, and for database dumps (hundreds of MB) this figure can easily double. A backup may look like a sequential read at a high level, but every file still has to be opened, read and closed. Not as sequential as you might want.

The advantage of using a disk as a backup device, as I see it, is that I can pull multiple file systems from multiple clients in parallel without worrying about keeping a physical drive streaming, since file systems vary quite a lot in performance. If you only have one filesystem to back up, the bottleneck might not be the target disk but the source file system itself. You might want to check for fragmentation and see whether that has any effect.
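The per-file overhead described above is easy to demonstrate yourself: copy the same total number of bytes once as a single large file and once as many small files, and compare the aggregate MB/s. A rough Python sketch (illustrative only; it ignores directory creation and metadata, and `copy_files_mbps` is just a name for this example):

```python
import os
import shutil
import time


def copy_files_mbps(paths, dst_dir):
    """Copy each file in `paths` into `dst_dir` and return aggregate MB/s.

    Every file adds an open/read/close round trip, so many small files
    report far lower throughput than one large file of the same total size.
    """
    total = 0
    start = time.perf_counter()
    for p in paths:
        shutil.copyfile(p, os.path.join(dst_dir, os.path.basename(p)))
        total += os.path.getsize(p)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / max(elapsed, 1e-9)
```

Feeding this the contents of a profiles directory versus one database dump of the same size should reproduce the gap described above.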