#1
Hitachi 9990 & MPXIO Performance
Hello,
Just a quick info-gathering question. I am at a customer site installing a new HDS 9990. The high-level config overview:

- HDS 9990 (OPEN-V 40 GB LUNs)
- HDS 9200 (legacy array)
- Sun Fire V880
- Brocade 4100s (2 fabrics)
- QLogic 2 Gb cards (375-3102) to the new SAN
- JNI FCE2-6412 cards to the old HDS 9200 array
- MPxIO enabled and configured for round-robin
- VxVM 4.1
- Oracle 9i

During this phased implementation we are in the data migration stage: we are mirroring the old storage from the HDS 9200 to the new LUNs on the TagmaStore (9990). Once the mirroring is complete, we will break off the plexes from the old array and be fully migrated to the new Hitachi. The customer has decided not to break the mirrors yet, and we have noticed a decrease in both write and read performance on all the volumes on the host. I would expect a slight decrease in write performance; however, we are also seeing up to a 1/5 millisecond increase in read time on each of the volumes. My assumption is that the double writes to two different (types of) LUNs are impacting our reads. Suggestions?
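For reference, round-robin multipathing under MPxIO is typically enabled with settings along these lines. This is a hedged sketch only: the exact file and parameter placement vary by Solaris release (on some releases mpxio-disable lives in fp.conf rather than scsi_vhci.conf), so treat it as illustrative, not as the poster's actual configuration.

```
# /kernel/drv/scsi_vhci.conf (illustrative config fragment)
mpxio-disable="no";
load-balance="round-robin";
```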
#2
On 17 Oct 2005 13:42:10 -0700, "Greg Brown"
wrote [original post quoted above, snipped]:
What do you mean by "types of LUNs"? Is one RAID 5 and the other RAID 1+0? As for the 1/5 millisecond (isn't that 200 microseconds?): could it be the switch? A V880 will throw out a lot of data. It could also be cache getting in your way: if the caching algorithm is not suited to your access pattern, you could be spending time seeking on spindles and loading data into cache that doesn't matter to you. Also, doesn't the Lightning line have an Oracle verification built in somewhere, like in firmware? If so, and it is enabled, it could be introducing latency. ~F
#3
The thing that comes to mind for me is that you are migrating from an old-technology midrange array to a highly scalable, new-technology enterprise array. I imagine this means the workload previously supported by the 9200 is relatively low, probably not enough to even register on the 9990; the 9990 should be capable of supporting many times the workload of the 9200. What tools are you using to measure disk I/O performance? Do you have any tool to measure array performance? It would be interesting to see the I/Os-per-second load and some cache-utilization statistics. Vic
#4
Hello Faeandar,
That's a good point. I will check into the Oracle verification today, as well as the cache, and update this posting. Both sets of LUNs are RAID-5; however, the LUNs from the 9200 are spread across multiple columns (6-8 disks per LUN), as opposed to the LDEVs from the TagmaStore (9990), which come from only 4 disks in the RAID group. That could be part (though not all) of the issue. Thanks for your help. Greg
#5
Hello Vic,
I am using vxstat to gather this information on a per-Veritas-volume basis. I do not currently have any tool for gathering performance metrics from the TagmaStore 9990 itself. Thanks, Greg
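For readers following along, per-volume latency like this is typically sampled with something like `vxstat -g <diskgroup> -i 5` (5-second interval samples). The sketch below is hedged: the disk group, volume names, and the sample output are all made up for illustration, not captured from the poster's system.

```shell
# Illustrative vxstat-style output. Columns: type, name, read ops,
# write ops, read blocks, write blocks, avg read time (ms),
# avg write time (ms). Values are fabricated for the example.
cat <<'EOF' > /tmp/vxstat.sample
vol oradata01 120453 98321 2409060 1966420 4.8 7.2
vol oradata02 110222 90118 2204440 1802360 4.9 7.5
EOF

# Flag any volume whose average read time exceeds a 4.5 ms baseline.
awk '$7 > 4.5 { print $2, "avg read", $7, "ms" }' /tmp/vxstat.sample
```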
#10
Something else to note is the queue-depth setting we set on the host. Currently I have the queue depth set to 20 (in /etc/system: set ssd:ssd_max_throttle=20, rebooted of course). I calculated this number from the queue depth per fibre port on the TagmaStore 9990, which is 1024, divided by the total number of LUNs mapped to that port on the FED (Front End Director). In this case: 1024 (queue depth for the fibre port) divided by 58 (total LUNs mapped to ports 7A & 8A) gives us 17.65, which we rounded up to 20. On the Thunder (9200), it is my understanding that the queue depth per fibre port is 256, so before our changes the sd:sd_max_throttle setting was 8. I am not sure why we decided to change the sd driver setting in addition to adding the ssd setting; the 9200 uses the sd driver and the 9990 uses the ssd driver. What I would like to do is set sd:sd_max_throttle back to 8, leave the ssd setting at 20, and see if this improves performance. I think that by changing the sd parameter we are over-queuing the JNI HBAs for the 9200, and that is causing our read and write latency. This is one avenue I am exploring at the moment. Greg