A computer components & hardware forum. HardwareBanter


Linux LVM software raid, and SAN question.



 
 
  #1  
January 23rd 08, 12:30 AM posted to comp.arch.storage
Kyle Schmitt

In the beefy enterprise-level Linux database installs, the ones where
you need massive redundancy supplied by two SAN devices, what is
currently the ideal way to set them up?

Software RAID to mirror the LUNs from both SANs, and use static
partitions?
Software RAID, then LVM on top of it?
LVM to mirror and handle the whole damned thing?
PowerPath for multipathing, or the multipathing the kernel provides?

The particular instance I'm dealing with is a database server that
will have two HBAs, each connected to a different SAN, which is
mirrored from the Linux side. I know several ways to set it up, but
I'm unsure which will really be the best way (not necessarily the
easiest) to do it.
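
To make option two concrete, the sort of thing I have in mind is
roughly this (device names and sizes are made up; assume /dev/sdb is
the LUN from SAN1 and /dev/sdc is the LUN from SAN2):

  # md RAID1 across one LUN from each SAN
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  # LVM on top of the mirror so volumes can be grown or moved later
  pvcreate /dev/md0
  vgcreate dbvg /dev/md0
  lvcreate -L 200G -n data dbvg
  mkfs.ext3 /dev/dbvg/data

versus option three, where LVM does the mirroring itself and md drops
out of the picture entirely.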

Thanks
--Kyle
  #2  
January 23rd 08, 01:30 AM posted to comp.arch.storage
Cydrome Leader

Kyle Schmitt wrote:
> In the beefy enterprise-level Linux database installs, the ones where
> you need massive redundancy supplied by two SAN devices, what is
> currently the ideal way to set them up?


I think the Linux part conflicts with "beefy", "enterprise-level" and
"massive redundancy".
  #3  
January 23rd 08, 05:27 AM posted to comp.arch.storage
Faeandar

On Tue, 22 Jan 2008 15:30:05 -0800 (PST), Kyle Schmitt wrote:
> [original post quoted in full; snipped]



Someone already posted about the inherent flaws in your statement about
beefy and such, so I'll skip that (but they're right, you know...).

What you're asking about is two fabrics. One HBA goes to a switch,
which goes to a storage port on the array. The other HBA goes to a
different switch and to another storage port on the same array.
Nothing ties these two switches together.

This is fairly common practice when availability is a top priority.

Forget LVM-based RAID; it's almost pointless when hardware RAID is in
the array (I assume you have an array, though you do not state that
explicitly).
However, make sure LVM is in place. It makes backend migrations much
simpler.
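
For example, when the time comes to move off an old array, something
along these lines (made-up device and volume group names) does the
migration online:

  # bring the new LUN into the volume group, drain the old one, drop it
  pvcreate /dev/mapper/newlun
  vgextend dbvg /dev/mapper/newlun
  pvmove /dev/mapper/oldlun /dev/mapper/newlun
  vgreduce dbvg /dev/mapper/oldlun

No downtime, no restore from tape.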

Do not cross the streams, Ray....
Do not create any connection between the two fabrics. This allows
independent maintenance on each fabric with zero impact on the
application.

As for multipathing, I thought MPIO was supported on Linux. If so,
use it. It works well and is free. Avoid PowerPath like the plague.
I'm not even sure they sell it anymore. I think Hitachi has EOL'd
their version of pay-for multipathing as well.
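
If memory serves, the in-kernel route is just device-mapper multipath;
on a RHEL-ish box it's roughly the following (package and service
names may differ on your distro, so don't hold me to them):

  # dm-multipath instead of PowerPath
  yum install device-mapper-multipath
  # edit /etc/multipath.conf: blacklist local disks, take the defaults otherwise
  service multipathd start
  chkconfig multipathd on
  # verify the paths; build md/LVM on the /dev/mapper/mpath* devices, not /dev/sd*
  multipath -ll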

Hmm, perhaps I'm missing a piece here. Are you asking about having
two completely separate SANs and not just two fabrics? So two arrays
as well? If so, that's less common but not unheard of.
The same practice applies, only you would use LVM to mirror. Or at
least I would.

~F
  #4  
January 23rd 08, 04:05 PM posted to comp.arch.storage
Kyle Schmitt

On Jan 22, 10:27 pm, Faeandar wrote:
> [earlier posts quoted in full; snipped]

I probably didn't state it clearly enough then (and let's avoid the
Linux vs Solaris vs AIX flame war over enterprise-level databases; I
can only assume neither of you was silly enough to suggest Windows
should ever be used).

The switches won't be attached in any direct way; I wasn't suggesting
that.

The box has two HBAs:
HBA1 attaches to Switch1, which attaches n* ways to SAN1 (which is a
floor-standing cabinet full of disks, using hardware RAID of course).
HBA2 attaches to Switch2, which attaches n ways to SAN2.
*I think n is 4 in this case.

We're dealing with two physical SANs.

The idea is that the box will handle mirroring Disk1 (presented by
SAN1 via HBA1) onto Disk2 (presented by SAN2 via HBA2).
Does that make more sense with regard to the software RAID 1 vs LVM
mirroring question?
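
To put the LVM flavour of that in concrete terms (device names and
sizes invented again; /dev/sdb from SAN1, /dev/sdc from SAN2):

  pvcreate /dev/sdb /dev/sdc
  vgcreate dbvg /dev/sdb /dev/sdc
  # -m1 keeps a second copy; listing both PVs puts one leg of the mirror on each SAN
  # --mirrorlog core keeps the mirror log in memory so no third device is needed
  # (the price is a full resync of the mirror after a reboot)
  lvcreate -m1 --mirrorlog core -L 200G -n data dbvg /dev/sdb /dev/sdc

versus the mdadm RAID 1 route I sketched in my first post.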

And I'll look into the built-in MPIO (it exists and is considered
stable IIRC, though I've never used it). I didn't realize PowerPath
raised such ire in people. Is it really that bad? Having used it for
the past half day, it hasn't left a good impression so far, that's
for sure.

Thanks,
Kyle
  #5  
January 23rd 08, 05:11 PM posted to comp.arch.storage
belpatCA

On Jan 23, 8:05 am, Kyle Schmitt wrote:
> The box has two HBAs:
> HBA1 attaches to Switch1, which attaches n* ways to SAN1 (which is a
> floor-standing cabinet full of disks, using hardware RAID of course).
> HBA2 attaches to Switch2, which attaches n ways to SAN2.
> *I think n is 4 in this case.
>
> We're dealing with two physical SANs.
>
> The idea is that the box will handle mirroring Disk1 (presented by
> SAN1 via HBA1) onto Disk2 (presented by SAN2 via HBA2).
> Does that make more sense with regard to the software RAID 1 vs LVM
> mirroring question?


I don't want to suggest this is a bad architecture (I'm sure there are
political/historical reasons), but I'm curious as to why you would
want software mirroring between two RAID boxes in different SANs.
Aiming for 'massive redundancy' by relying on software on a server to
copy every byte to two different arrays isn't exactly common practice,
at least not in my world...
Just blindly mirroring between the two arrays only gives you some
redundancy against hardware failure (while introducing the possibility
of software failure on your server), but it doesn't help at all with
logical or user errors, which are much more common. And the most
common hardware failures (disk crash, controller crash, cable failure)
are presumably already taken care of by the arrays, unless you've got
a crappy one.

Something more common, as suggested by earlier posts, would be to have
one SAN with two separate fabrics, and every end device (HBA and both
storage arrays) connected to both fabrics, with each switch belonging
to only one fabric. Then use one array for primary storage, with
dual-redundant everything from the HBA down to the array, and use the
other array for snapshots or clones.
You can have those snapshots generated by your apps or by the volume
manager on the server if you like. Or some storage arrays support
creating 'remote' snapshots.

  #6  
January 23rd 08, 09:25 PM posted to comp.arch.storage
Kyle Schmitt

On Jan 23, 10:11 am, belpatCA wrote:
> [quoted text snipped]


Each time I post I realize I forgot some piece of data or another.
In my description of one HBA hooking up to a switch, which has four
connections to the SAN, I failed to mention that the four connections
consist of two fabrics.
The consensus here has been that in the final setup there will be two
HBAs to each switch (a total of four on the box), for redundancy.

I'm not sure about remote snapshots, but it's possible that our SAN
supports it, or that the package/upgrade for that is available. I'll
have to look into it.

Historical-political issues abound, far too many to bring up in a
post. Suffice it to say, there are reasons we need and want two
physical SANs hooked up here, and the majority are even legitimate.

Thanks,
Kyle
  #7  
January 23rd 08, 11:02 PM posted to comp.arch.storage
Cydrome Leader

Kyle Schmitt wrote:
> [earlier posts quoted in full; snipped]

The main objection you're hearing here is to having a database server
also double as a SAN mirroring device.

Say the machine crashes. Which SAN even has the correct data anymore?

There are (costly) hardware devices which will present a set of LUNs
to your host, the database machine, and do the mirroring to the
separate SANs on the back end.

Assuming this is what you really want, you end up with a database
server doing only one thing, being a database server, and you have
some extra hardware dedicated to mirroring your SANs.

Will all this extra stuff increase reliability? I really don't know.
  #8  
January 24th 08, 12:36 AM posted to comp.arch.storage
Kyle Schmitt

On Jan 23, 4:02 pm, Cydrome Leader wrote:
> There are (costly) hardware devices which will present a set of LUNs
> to your host, the database machine, and do the mirroring to the
> separate SANs on the back end.
> [snip]


This is work; "want to do" and "will do" may be different animals.
Add to that the fact that the server rooms are located in the midst of
a large industrial/factory building, and issues beyond standard IT
infrastructure crop up here.

So it appears that it's not standard to have the DB server handle
the mirroring across the SANs.
OK, fair enough.
But, to get to the core of it... in Linux, would this best be done via
LVM mirroring, or software RAID (level 1)?

Thanks,
Kyle


PS: Doing this in Linux (mirroring the SANs via LVM/software RAID) is,
funnily enough, a mirror of how the previous system did it in AIX
(using LVM).
  #9  
January 24th 08, 12:56 AM posted to comp.arch.storage
lahuman9

On Jan 22, 6:30 pm, Kyle Schmitt wrote:
> [original post quoted in full; snipped]


Seems a silly way to pinch some pennies: you've already bought two SAN
storage devices. Why not let the SAN storage devices mirror between
each other (preferably over the SAN), set up a second identical Linux
server, and present both the primary and mirrored LUNs to both hosts
over both fabrics? The second Linux server and the second SAN storage
device would be passive in this configuration, and as long as you
don't try to import or mount the SAN partitions on the second host
while the first host has them, you should be fine. I'd definitely use
VxVM in this config to manage the LUNs, which will help prevent that
situation. It's also easier to recover from errors than ext3 or
whatever Linux is using now.

I can't imagine relying on software mirroring for this configuration,
especially in Linux, especially for databases. If the SAN devices are
in different locations, put the second Linux box in the second
location. Throw Linux out and replace it with something better the
first chance you get. You are relying on mirroring that uses device
drivers written by random collections of geeks across the world.
  #10  
January 24th 08, 04:54 PM posted to comp.arch.storage
belpatCA

On Jan 23, 4:36 pm, Kyle Schmitt wrote:
> [...]
> So it appears that it's not standard to have the DB server handle
> the mirroring across the SANs.
> OK, fair enough.
> But, to get to the core of it... in Linux, would this best be done
> via LVM mirroring, or software RAID (level 1)?


Since you seem to be stuck with this architecture and these are your
only two options, I would use LVM.
The software RAID stuff has really been designed to provide RAID
capabilities for people who couldn't afford a RAID controller or
storage array.
It likes to think of storage as disks, and its only goal in life is
to protect against catastrophic disk failure.
You, on the other hand, have LUNs coming from storage arrays via HBAs
over two dual-redundant SANs. I'd like to think you certainly don't
have to worry about disk failures anymore, since there shouldn't be
any single point of failure left, other than perhaps a power outage
taking out an entire array.

At least with LVM, you get to attach and detach the LUNs from the
second array as you see fit (for example, if you want to take down
the second array for maintenance).
And with LVM, maybe you can even wow your boss by making a snapshot
on those LUNs, providing a logical snapshot taken at strategic points
in time, rather than a blind mirror.
Note that without enterprise-level software (read: $$$) that would let
you integrate the snapshot with the database, you'll need to manually
coordinate flushing and freezing the DB before taking a snapshot at
the volume manager level.
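
As a rough sketch of what I mean (volume and device names are made up,
and the exact quiesce step depends entirely on your database):

  # drop the mirror leg that lives on the second array before its
  # maintenance window, then re-add it afterwards and let LVM resync
  lvconvert -m0 dbvg/data /dev/sdc
  lvconvert -m1 dbvg/data /dev/sdc

  # for a usable point-in-time copy: flush/quiesce the DB first (backup
  # mode or a clean shutdown), take the snapshot, then release the DB
  lvcreate -s -L 20G -n data_snap /dev/dbvg/data
  mount -o ro /dev/dbvg/data_snap /mnt/snap

None of that is exact, but it shows the kind of knobs LVM gives you
that plain md RAID 1 doesn't.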
 



