#1
Which approach to maximise speed with a SAN?
Hello,
I had a long list of technical questions over the phone today for a new job, ranging from "How do you get the return code from a shell script?" (easy: echo $?) to "Which approach to maximise speed with a SAN?" I don't know, but as I have to meet these guys for a face-to-face interview on Tuesday, it might be an idea to find out. I can look into a comparison between RAID 1, 5 and 10 by myself (since I already knew about 1 and 5), but perhaps someone on this list might help me out with the SAN question? TIA, Tony
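For the first interview question mentioned above, a minimal sketch of checking a command's return code in a POSIX shell (no assumptions beyond a standard /bin/sh):

```shell
#!/bin/sh
# $? holds the exit status of the most recently executed command.
true
echo "status of true: $?"     # prints 0 on success

false
status=$?                     # capture immediately; the next command overwrites $?
echo "status of false: $status"
```

The one trap worth knowing for an interview: `$?` is reset by every command, so capture it into a variable right away if you need it more than once.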
#2
#3
tony barnwell wrote:
> Which approach to maximise speed with a SAN?

Like all performance issues: find the bottleneck and remove it, then find the next bottleneck. You will need to understand the I/O load, because the bottleneck will be in different places depending on the mix. Factors to consider:

- Multipathing (e.g. PowerPath, DMP) if the bottleneck is bandwidth or queuing
- For highly random I/Os, the memory-centric enterprise boxes like the 9900/DMX/Shark may not bring any performance benefit over midrange boxes with a shorter path to disk (e.g. FAStT, CX600, LSI-based controllers, ...)
- Physical layout, both in terms of RAID level, striping, and how the LUNs map to physical disks
- Database and filesystem block sizes
- Host-side caching/memory issues, such as SGA size and buffer cache size
- Mapping between logical volumes on the host and the underlying LUNs
- I/O contention within the SAN (e.g. Inter-Switch Links, contention on the front end of the storage array, contention with tape traffic on the HBA during backups)

Dave
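One way to put numbers behind the "find the bottleneck, then find the next one" advice is to measure sequential throughput at each layer and see where it drops. A rough sketch using dd (the target path here is a placeholder: writing to /dev/null as shown only exercises memory; point of= at a file on the filesystem under test, or a raw LUN, to measure that layer):

```shell
#!/bin/sh
# Crude sequential-throughput probe: push 64 MB through dd and read
# the transfer-rate summary it prints on stderr. Replace of=/dev/null
# with a path on the storage you actually want to measure.
dd if=/dev/zero of=/dev/null bs=1048576 count=64 2>&1 | tail -1
```

Repeating this against the local disk, the filesystem on the LUN, and the raw device lets you compare layers with the same tool, which is usually enough to tell a host-side bottleneck from a fabric or array one.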
#4
dave dickerson wrote in message ...
tony barnwell wrote:
>> Which approach to maximise speed with a SAN?
> [Dave's list of SAN performance factors snipped]

Thanks for both the responses. Comprendo, getting the idea. I had a look at some McData stuff. One comment related to a "single 200mb pipe": should we be looking at Gb pipes, ideally, i.e. five times faster? And possibly you might want to keep the redundant connections and avoid a single point of failure.

I liked the comment about iteratively finding the bottleneck and fixing it. The interview question might have related to the design of a SAN, in which case you have to guess where the bottleneck might be. Should you maybe start with the physical disks and their layout (say, RAID 10 for speed) and work outwards towards the clients (probably the most important clients will be databases?), applying the guidelines above?

Thanks again, Tony
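On the "start with the physical disks" idea: the textbook trade-off between the RAID levels mentioned in the thread can be sketched with simple shell arithmetic, assuming a hypothetical set of n identical disks (the disk count is made up; "write penalty" means physical I/Os per logical random write):

```shell
#!/bin/sh
n=8   # hypothetical number of disks in the set

# Usable capacity, in whole-disk equivalents
raid1_cap=$((n / 2))    # mirrored pairs: half the raw capacity
raid5_cap=$((n - 1))    # one disk's worth of space goes to parity
raid10_cap=$((n / 2))   # striped mirrors: half, like RAID 1

# Classic random-write penalty (physical I/Os per logical write)
raid1_pen=2             # write both sides of the mirror
raid5_pen=4             # read data, read parity, write data, write parity
raid10_pen=2            # write both mirrors in one stripe column

echo "RAID 1:  capacity=${raid1_cap}/${n} disks, write penalty=${raid1_pen}"
echo "RAID 5:  capacity=${raid5_cap}/${n} disks, write penalty=${raid5_pen}"
echo "RAID 10: capacity=${raid10_cap}/${n} disks, write penalty=${raid10_pen}"
```

This is why RAID 10 keeps coming up for write-heavy databases: it pays half the capacity, like RAID 1, but avoids RAID 5's read-modify-write parity penalty while still striping for throughput.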
#5
> Thanks for both the responses. Comprendo, getting the idea. I had a look at some McData stuff. One comment related to a "single 200mb pipe": should we be looking at Gb pipes, ideally, i.e. five times faster? And possibly you might want to keep the redundant connections and avoid a single point of failure.

I hope he meant 2000 Mbit. Doing anything with 1 Gig equipment would seem like a bad idea, since 2 Gig isn't all that expensive any more.

> I liked the comment about iteratively finding the bottleneck and fixing it. The interview question might have related to the design of a SAN, in which case you have to guess where the bottleneck might be. Should you maybe start with the physical disks and their layout (say, RAID 10 for speed) and work outwards towards the clients (probably the most important clients will be databases?), applying the guidelines above?

If the storage is decent, one thing to look for is overloaded ISLs.

--
/Jesper Monsted
#6
tony barnwell wrote:
> Had a look at some McData stuff. One comment related to a "single 200mb pipe": should we be looking at Gb pipes, ideally, i.e. five times faster?

Sounds like you may be confusing Mbits/Gbits with Mbytes/Gbytes in some of the literature. The "single 200mb pipe" is referring to a 2 Gbit link.

--
Nik Simpson
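The bits-versus-bytes confusion is easy to check with arithmetic. Using nominal round figures (real 2 Gbit Fibre Channel signals at 2.125 Gbaud, so actual numbers are slightly higher), 8b/10b line encoding carries 8 payload bits per 10 line bits:

```shell
#!/bin/sh
# Nominal 2 Gbit link with 8b/10b encoding: usable throughput is
# line_rate * 8/10 payload bits, divided by 8 bits per byte.
line_mbit=2000
payload_mbit=$((line_mbit * 8 / 10))   # 1600 Mbit/s of payload
mbytes=$((payload_mbit / 8))           # 200 MB/s
echo "2 Gbit FC ~= ${mbytes} MB/s usable"
```

So a "single 200 MB pipe" and "a 2 Gbit link" are the same thing, and the suggestion to go "5 times faster" was comparing a link to itself.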