January 31st 04, 01:39 PM
Jan-Frode Myklebust

In article , Jason Mather wrote:

> You can have multiple hosts on one SCSI bus; that's how a lot of clusters
> are built. You just need to make sure terminators are at both ends of the
> bus and that the adapters have distinct SCSI IDs. No need for FC.
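
(For anyone wanting to sanity-check that on Linux: here's a rough Python
sketch that parses /proc/scsi/scsi and lists the targets a node sees.
Run it on both nodes and compare the output. The stand-alone script is
just illustrative.)

import re

PROC_SCSI = "/proc/scsi/scsi"   # standard on 2.4/2.6 Linux kernels

def list_scsi_targets(path=PROC_SCSI):
    """Return (host, channel, id, lun) for every attached device."""
    pat = re.compile(r"Host:\s*scsi(\d+)\s+Channel:\s*(\d+)"
                     r"\s+Id:\s*(\d+)\s+Lun:\s*(\d+)")
    targets = []
    for line in open(path):
        m = pat.search(line)
        if m:
            targets.append(tuple(int(x) for x in m.groups()))
    return targets

if __name__ == "__main__":
    for host, chan, tid, lun in list_scsi_targets():
        print("scsi%d channel %d id %d lun %d" % (host, chan, tid, lun))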


I just tried a setup like this, but concluded that it isn't as
attractive as it sounds. It seemed to work fine while both nodes
were up; the problems appeared when a node was booted or power-cycled.
That led to a few SCSI bus resets, and sometimes hangs during SCSI-card
initialization. After searching some high-availability mailing lists,
the consensus seemed to be that multi-initiator ("multi-homed") SCSI
is best avoided.
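
A common mitigation in cluster setups (not something I've tried here,
and a dumb disk may well not support it) is SCSI persistent
reservations: only the node holding the reservation can write to the
shared disk, whatever the other node does while booting. Here's a
minimal Python sketch wrapping sg_persist from sg3_utils; the device
path and registration key are placeholders:

import subprocess

DEVICE = "/dev/sdb"  # how this node sees the shared disk (placeholder)
MY_KEY = "0x1"       # per-node registration key (placeholder)

def sg_persist(*args):
    # Thin wrapper around the sg_persist tool from sg3_utils.
    subprocess.run(["sg_persist"] + list(args) + [DEVICE], check=True)

def register():
    # Register this initiator's key with the target.
    sg_persist("--out", "--register", "--param-sark=" + MY_KEY)

def reserve():
    # Take a Write Exclusive (type 1) reservation under our key.
    sg_persist("--out", "--reserve", "--param-rk=" + MY_KEY,
               "--prout-type=1")

def show():
    # Print the current reservation holder, if any.
    sg_persist("--in", "--read-reservation")

if __name__ == "__main__":
    register()
    reserve()
    show()

The other node would register its own key, and a fencing agent could
later preempt a dead node's reservation.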

My setup was two Dell PowerEdge 2650s, each with an Adaptec aic7899
SCSI controller, and one dumb external SCSI disk.

Now I've replaced the dumb SCSI disk with a Nexsan ATABoy2, which gives
us two independent SCSI channels to the disk, so the hosts no longer
share a bus. That seems much more reliable, but the ATABoy2 is a bit
expensive.

So, my question: are people really running simple setups with two
hosts accessing the same disks on the same SCSI bus, and if so, are
there special settings that need to be made on the SCSI controllers?


-jf