#21
Network Storage Help
In article ,
Faeandar wrote:

> I disagree with this. Performance is never better than with local
> SCSI storage. DAS is still king when it comes to raw performance.

Well, you'd think so, but it's not actually true. For one thing, you can't really fit eighty or a hundred spindles inside that server case, and if you find an external enclosure that works, you'll need to move to Fibre Channel anyway and lose the theoretical speed advantage of a direct SCSI Ultra320 connection. (Maybe if they actually *had* SCSI 1280 I'd agree with you :-)

And a SAN is (at least usually) a lot more than just an external storage cabinet and a fibre strand. These arrays have multiple RAID levels, even custom RAID arrangements, gigabytes of battery-backed cache with sophisticated caching algorithms (one reason throughput to the SAN can be faster than direct I/O to a local disk: you don't actually hit a disk at all), multiple hot spares, redundant controller heads, all kinds of things that are pretty hard to engineer into a standalone server. Couple that level of architecture with an FC fabric and it's hard to argue that any kind of point storage is technically superior.

> Reliability is actually better in most DAS environments because you
> have fewer bits in the middle. No shared cache to get corrupted. No
> zoning to go bat**** and lip storm. Things like that.

Hmmmm, well, do you think that an FC fabric is inherently less reliable than a 40-foot SCSI cable? I'm not sure what you mean by shared cache corruption; the cache in something like an EMC array just doesn't somehow "get corrupted" any more than your server memory somehow "gets corrupted".
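The spindle-count argument above comes down to simple arithmetic. Here is a rough sketch, using assumed circa-2006 figures (320 MB/s for one Ultra320 bus, ~200 MB/s payload per 2 Gb/s FC link, ~70 MB/s sequential per 15k rpm disk, all illustrative rather than measured):

```python
# Back-of-envelope throughput comparison: one shared SCSI bus vs. an FC fabric.
# All rates are assumed, illustrative figures (MB/s), not vendor specs.
ULTRA320_BUS_MBPS = 320   # one Ultra320 bus, shared by every device on it
FC_2G_LINK_MBPS = 200     # approximate payload of one 2 Gb/s Fibre Channel link
SPINDLE_SEQ_MBPS = 70     # assumed sequential rate of a single 15k rpm disk

def aggregate(spindles: int, per_spindle: int = SPINDLE_SEQ_MBPS) -> int:
    """Raw aggregate streaming rate of N disks, ignoring bus limits."""
    return spindles * per_spindle

raw = aggregate(100)                              # 100 spindles: 7000 MB/s raw
scsi_limited = min(raw, ULTRA320_BUS_MBPS)        # capped by the one SCSI bus
fabric_limited = min(raw, 4 * FC_2G_LINK_MBPS)    # e.g. 4 links into the fabric

print(scsi_limited, fabric_limited)  # 320 800
```

The point the poster is making falls out directly: past a handful of disks, a single parallel SCSI bus is the bottleneck, while a fabric can scale by adding links.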
#22
Network Storage Help
In article ,
Faeandar wrote:

> My experience is that applications that claim they need block access
> are incorrect.

That's only trivially true, in that applications don't generally require block access. What you're supplying block access to is usually the OS, which is in turn interacting with the application via something like Veritas or OCFS. Certain NFS implementations are approved for use with Oracle, but there's a performance penalty relative to a direct-access SAN device. I don't see how you could build a petabyte data warehouse on top of NFS; IMHO it's just too inefficient.

> And Oracle *recommends* using NFS with Oracle RAC, so clustered
> Oracle on NAS is preferred.

Do you have a reference for that? I don't remember that being true as of 10g.

> The only things I can think of that *need* _shared_ block-level
> storage are clustered file systems. Other than that, everything I've
> ever run across will work on NAS.

A large mail spool won't. In fact, anything that depends on reliable file locking will almost certainly break over an NFS link. I'm aware of the improvements in v4; they still don't support clustered file systems well.
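The file-locking point can be illustrated with the kind of POSIX byte-range lock an mbox-style mail spool takes before appending a message. The path and message text below are hypothetical; the substance is that `fcntl`/`lockf` calls like these are exactly what travels over NFS's separate lock protocol (NLM in v3 and earlier), which is where the historical breakage lives:

```python
import fcntl
import os
import tempfile

# Hypothetical spool file; a real MTA would use something like /var/mail/user.
path = os.path.join(tempfile.mkdtemp(), "spool.mbox")

with open(path, "a+") as f:
    # Exclusive advisory lock over the whole file. On a local filesystem this
    # is reliable; over NFS it depends on lockd/NLM working end to end.
    fcntl.lockf(f, fcntl.LOCK_EX)
    f.write("From sender@example invalid-date\nSubject: test\n\nbody\n")
    f.flush()
    fcntl.lockf(f, fcntl.LOCK_UN)
```

Run locally this just works; the poster's claim is that the same sequence against an NFS mount can fail or, worse, silently not exclude a concurrent writer, which is fatal for a shared mail spool.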