how many servers can connect to direct attached scsi storage



 
 
  #1  
Old April 14th 04, 09:07 AM
Ken Shaw

I'm trying to figure out how many servers I can connect to a direct attached
hard disk SCSI array.

I've been told it's only two - is this true?

Cheers,

Ken


  #2  
Old April 14th 04, 09:34 AM
Rob Turk

"Ken Shaw" wrote in message
. ..
I'm trying to figure out how many servers I can connect to a direct

attached
hard disk SCSI array.

I've been told its only two - is this true?

Cheers,

Ken


You can attach as many devices to a SCSI bus as there are SCSI addresses
available. On a SCSI bus there are 16 (or 8 for narrow SCSI) addresses in
total. Assuming your array takes up a single address (no fancy
virtualisation), that leaves room for 15 other devices. Those can be any SCSI
device or server, so at most you can attach 15 servers, in theory.

In real life you will be limited by cable length, by performance, and by the
need for a mechanism to actually share the information on the disk array.
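
For what it's worth, the arithmetic can be sketched like this (Python; the
function name is just made up for illustration). It assumes the array
occupies exactly one ID and that every server's host adapter needs its own
ID:

    # ID budget on a parallel SCSI bus. Every device on the bus,
    # including each server's host adapter, must own a unique SCSI ID.
    def max_servers(bus_ids=16, ids_used_by_array=1):
        # Wide SCSI offers 16 IDs, narrow SCSI only 8; whatever the
        # array doesn't take is left over for host adapters.
        return bus_ids - ids_used_by_array

    print(max_servers())     # 15 servers on a wide bus, in theory
    print(max_servers(8))    # 7 on a narrow bus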

Rob


  #3  
Old April 14th 04, 09:38 AM
Ken Shaw

Thanks for that.

So if I wanted to have 3 servers accessing a shared RAID array with 8 HDDs
in that RAID array - I could do that?

Cheers,

Ken

"Rob Turk" wrote in message
...
"Ken Shaw" wrote in message
. ..
I'm trying to figure out how many servers I can connect to a direct

attached
hard disk SCSI array.

I've been told its only two - is this true?

Cheers,

Ken


You can attach as many devices to a SCSI bus as there are SCSI addresses
available. On a SCSI bus there are 16 (or 8 for narrow SCSI) addresses in
total. Assuming your array takes up a single address (no fancy
virtualisation) that leaves room for 15 other devices. Those can be any

SCSI
device or server, so at most you can attach 15 servers, in theory.

In real life you will be limited by cable length, performance and by a
mechanism to actually share the information on the disk array.

Rob




  #4  
Old April 14th 04, 10:34 AM
Rob Turk

"Ken Shaw" wrote in message
...
Thanks for that.

So if I wanted to have 3 servers accessing a shared RAID array with 8

HDD's
in that RAID array - I could do that?

Cheers,

Ken


Yes, but do note that this works at the hardware level only. If you create
file systems on the array and expect all servers to be able to access them
simultaneously, you need additional software. Most file systems, NTFS
included, are not designed to have multiple computers changing data on them:
each system assumes it has exclusive access to all file structures. When one
system makes changes, the other systems will not know about them and may
overwrite them. Simply hooking it all up will result in data corruption.
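
As a toy illustration of the failure mode (Python, with invented names, and
vastly simpler than NTFS or any real file system): two hosts each cache the
on-disk free-block map, allocate independently, and end up claiming the same
block.

    # Two hosts share one "disk" but each trusts its own cached metadata.
    disk_free_map = [True] * 8                 # True = block is free

    class Host:
        def __init__(self, name):
            self.name = name
            self.cache = list(disk_free_map)   # private copy, soon stale
        def allocate_block(self):
            blk = self.cache.index(True)       # first block *it* thinks is free
            self.cache[blk] = False
            disk_free_map[blk] = False         # writes back, unaware of the other host
            return blk

    a, b = Host("server1"), Host("server2")
    print(a.allocate_block(), b.allocate_block())   # both print 0: the same block
    # Both servers now write file data into block 0 -> corruption.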

Check out Veritas SANPoint or Sistina GFS for details. There are other
companies offering similar functionality. Google:
http://www.google.com/search?sourcei...ed+file+system for more details.

Rob


  #5  
Old April 14th 04, 11:04 AM
Ken Shaw

Thanks for that, Rob.

A lot of the links that come from that Google search are about SAN
technology. If I'm sharing a SCSI array with just a few machines, is this
classed as a SAN? Do you know if Windows Datacenter Server handles the
connection of direct attached storage to multiple machines? From what I
understand, Windows clusters with SQL Server can only be set up if you're
using direct attached storage. So can the OS handle the behaviour you're
talking about?

Cheers,

Ken

"Rob Turk" wrote in message
...
"Ken Shaw" wrote in message
...
Thanks for that.

So if I wanted to have 3 servers accessing a shared RAID array with 8

HDD's
in that RAID array - I could do that?

Cheers,

Ken


Yes, but do note that this is on a hardware level only. If you create file
systems on the array and expect all servers to be able to access those
simultaneously then you need additional software. Most file systems like
NTFS are not designed to have multiple computers changing data on it. Each
system assumes it has exclusive access to all file structures. When one
system makes changes, the other systems will not know about the changes

and
may overwrite them. Trying to just hook it all up will result in data
corruption.

Check out Veritas Sanpoint or Sistina GFS for details. There are other
companies offering similar functionality. Google:

http://www.google.com/search?sourcei...ed+file+system for more details.

Rob




  #6  
Old April 14th 04, 01:27 PM
Nik Simpson

Ken Shaw wrote:
> Thanks for that, Rob.
>
> A lot of the links that come from that Google search are about SAN
> technology. If I'm sharing a SCSI array with just a few machines, is
> this classed as a SAN?


Well, yes and no. The main reason that SCSI never really took off as the
basis for shared storage is that it doesn't work particularly well. You can
build a small shared-storage cluster around SCSI, but you may well end up
with more trouble than you expect, with performance in particular.

> Do you know if Windows Datacenter Server handles
> the connection of direct attached storage to multiple machines?

Clustering in the Windows world is "shared nothing", i.e. if you have two
machines in a cluster, machine 1 exclusively owns its filesystems and
machine 2 exclusively owns its filesystems. If, for example, machine 1 fails,
machine 2 takes over its filesystems (and the associated applications).

There is no general mechanism in Windows clustering for shared access to a
single filesystem at the SCSI/FC level; that requires additional software
such as the examples mentioned by Rob.
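
To make "shared nothing" concrete, here's a minimal sketch (Python; the node
and LUN names are invented): every filesystem has exactly one owner at any
moment, and failover transfers ownership wholesale instead of ever sharing
it.

    # Shared-nothing clustering in miniature: one owner per filesystem.
    ownership = {"lun0": "node1", "lun1": "node2"}

    def fail_over(dead_node, survivor):
        # The survivor imports everything the dead node owned; at no
        # point do two live nodes mount the same filesystem at once.
        for lun, owner in ownership.items():
            if owner == dead_node:
                ownership[lun] = survivor

    fail_over("node1", "node2")
    print(ownership)    # {'lun0': 'node2', 'lun1': 'node2'}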

I don't believe that MS supports shared SCSI storage in Windows clustering
anymore, because of the problems everybody had trying to get it to work in
the late 90s (I know, I was one of the poor slobs trying :-). These days, MS
clustering (and anybody else's clustering, for that matter) pretty much
assumes Fibre Channel attached storage.

> From what I understand, Windows clusters with SQL Server can only be
> set up if you're using direct attached storage.


Define "direct attached" there is nothing in the Windows clustering model
that requires direct attached as opposed to SAN attached because
functionally (i.e. to the OS) there is no difference between having an
exclusive point-to-point connection from the host to a Fibre array and doing
the same thing via an FC switch.

> So can the OS handle the behaviour you're talking about?


If you mean can it handle the locking issues associated with a shared
filesystem, the answer is no, it doesn't even try.

Perhaps you could explain the "high-level" problem that is leading you
towards a shared SCSI storage solution; there may be other ways of
addressing it.


--
Nik Simpson


  #7  
Old April 15th 04, 03:00 AM
Ken Shaw

Thanks Nik.

Ok, here's where we're at.

We've got 3 Dell PowerEdge servers running an ASP.NET app. That's connecting
to a standalone SQL Server 2000 machine. The app we're running involves
uploading significant numbers of bytes from the clients, to be stored as
files somewhere.

We need high availability and centralised storage. What I am trying to do
is have failover clustering working on both the database server and the
web servers.

What I was hoping was that the database servers could be in a failover pair
using a shared hard-disk enclosure, and that the 3 web servers could also be
attached to that one hard-disk enclosure.

The space requirements are geared toward the web servers. We'll be accepting
about 200 GB of files from users of the application, whereas the SQL
database will only be a few gigabytes. So the space requirements of the file
servers are much higher.


Cheers,

Ken



"Nik Simpson" wrote in message
. ..
Ken Shaw wrote:
Thanks for that Rob.

A lot of the links that come from that google search are about SAN
technology. If I'm sharing a SCSI array with just a few machines is
this classed as a SAN?


Well, yes and no, the main reason that SCSI never really took off as the
basis for shared storage is that it doesn't work particularly well. You

can
build a small shared storage cluster around SCSI, but you may well end

with
more trouble than you expect with performance in particular.

Do you know if Windows Datacentre Server handles
the connection of direct attached storage to multiple machines?


Clustering in the Windows world is "shared nothing" i.e. if you have two
machines in a cluster, machine 1 exclusively owns its filesystems and
machine 2 exclusively owns its filesystems. If for example machine 1

fails,
machine 2 takes over the filesystems (and associated applications.)

There is no general mechanism in Windows clustering for shared access to a
single filesystem at the SCSI/FC level that requires additional software
such as the examples mentioned by Rob.

I don't beleive that MS supports shared SCSI storage in Windows clustering
anymore because of the problems everybody had trying to get it to work in
the late 90s (I know, I was one of the poor slobs trying :-) These days,

MS
clustering (and anybody else's clustering for that matter) pretty much
assumes Fibre attached storage.

From what I understand - windows clusters with SQL Server can only be

set
up if
you're using direct attached storage.


Define "direct attached" there is nothing in the Windows clustering model
that requires direct attached as opposed to SAN attached because
functionally (i.e. to the OS) there is no difference between having an
exclusive point-to-point connection from the host to a Fibre array and

doing
the same thing via an FC switch.

So can the OS handle the behaviour you're talking about?


If you mean can it handle the locking issues associated with a shared
filesystem, the answer is no, it doesnt even try.

Perhaps you could explain the "high-level" problem that is leading you
towards a shared SCSI storage solution, there may be tother ways of
addressing the problem.


--
Nik Simpson




 




Thread Tools
Display Modes

Posting Rules
You may not post new threads
You may not post replies
You may not post attachments
You may not edit your posts

vB code is On
Smilies are On
[IMG] code is On
HTML code is Off
Forum Jump

Similar Threads
Thread Thread Starter Forum Replies Last Post
OT - free news servers...? Christo General 6 February 18th 05 07:02 PM
NTL to PC and Xbox Scott Homebuilt PC's 8 September 14th 04 09:53 AM
First Build Computer - Need Checklists and Other Tracking Documents Rob Homebuilt PC's 5 July 25th 04 12:03 AM
UPS and number of computers that can connect to single unit Jeff Homebuilt PC's 4 October 26th 03 07:10 PM
EFFECTIVE ON SERVERS BOUGHT AFTER JULY 30 2003 ENJOY !!!!! Shaun Compaq Computers 2 October 13th 03 05:16 PM


All times are GMT +1. The time now is 07:59 AM.


Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.