HP EVA3000 vs IBM DS4300 Turbo
#11
Old March 16th 05, 07:41 PM
jlsue

On Wed, 16 Mar 2005 17:02:08 GMT, Faeandar wrote:



BS. Unless you test a specific workload, you do not know what the
performance characteristics of that workload will be.


That was direct from the HP engineers, so take it up with them. IO
patterns are only a consideration when they don't require more than the
aggregate throughput of a single controller, usually around 80MB/sec.


Again, you are taking a discussion that most likely covered your
specific circumstance and trying to apply it everywhere.

There are 4 ports, two for each controller, and each controller can
manage its own set of LUNs. So the aggregate is much higher than you
claim.
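
Quick back-of-the-envelope math on that aggregate, assuming 2Gb/sec FC
host ports (my assumption; check the actual EVA3000 spec sheet):

    # Aggregate host-side bandwidth of a 2-controller, 4-port array,
    # assuming 2 Gb/s Fibre Channel ports. With 8b/10b encoding each
    # byte takes 10 bits on the wire, so ~200 MB/s usable per port.
    ports = 4                                 # 2 per controller, both active
    usable_mb_s_per_port = 2.0 * 1000 / 10    # ~200 MB/s per port
    aggregate = ports * usable_mb_s_per_port
    print(f"~{usable_mb_s_per_port:.0f} MB/s per port, "
          f"~{aggregate:.0f} MB/s aggregate")  # ~200 and ~800 MB/s

which is an order of magnitude above the 80MB/sec figure quoted above.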



In fact, the controller is often not the bottleneck; the disk spindles
are.


You're saying that 64 drives would be a bottleneck and not the single
controller in front of them? Right.


No. I am saying that in practice, the bottleneck isn't as bad as you
might make it out to be for most people. But I'm also trying to be
very careful and limit it to this specific request, which was 2TB with
14 drives.

In my real-world EXPERIENCE, this environment will most likely not hit
any controller bottlenecks.



In practice, having lots of spindles to service an I/O in a LUN on the
EVA will alleviate most of the bottleneck problems that most workloads
see - in my experience.


A single LUN means a single system mounting it, generally. The shared
aspect of SAN means more than one LUN and more than one system would be
accessing data behind that controller.


Both controllers are active, with 2 ports each.

The assumption is that the controller is the bottleneck, something
which is not necessarily true, especially at the 2TB EVA3000 level
that the original poster is considering.


Again, not an assumption but a fact stated by HP. You should talk to
them more without your rose-colored glasses on.
Ask the hard questions that you apparently don't want the answers to.


Your interpretation of information stated by HP, actually. I HAVE
talked to them, many times, and in the context of many different
customer environments. There is no one-size-fits-all as your
"explanation" seems to portray.

The real difference is that I not only have real-world experience in
many different environments, but even with all that, I'm not foolish
enough to claim I know the issues this particular person will have.
I recognize that specific instances need a bit more investigation.


I'm not saying the EVA doesn't have a place; the virtualization
capabilities of that single controller are actually cool. But as I
said, if performance is your main concern then the EVA is not on the
short list. Not by a long shot.


Not only are you incorrect about the "single controller" issue, but
you are drawing broad-based conclusions that may not apply to specific
circumstances - especially in this case, with only 14 drives and 2TB
of storage.

I've seen fully-configured EVA5000s that were hit very heavily, and
the so-called performance issues you claim were never evident in the
customer's applications.

--- jls
The preceding message was personal opinion only.
I do not speak in any authorized capacity for anyone,
and certainly not my employer.
#12
Old March 17th 05, 01:48 AM
Faeandar

On Wed, 16 Mar 2005 19:41:42 GMT, jlsue wrote:

On Wed, 16 Mar 2005 17:02:08 GMT, Faeandar wrote:



BS. Unless you test a specific workload, you do not know what the
performance characteristics of that workload will be.


That was direct from the HP engineers, so take it up with them. IO
patterns are only a consideration when they don't require more than the
aggregate throughput of a single controller, usually around 80MB/sec.


Again, you are taking a discussion that most likely covered your
specific circumstance and trying to apply it everywhere.

There are 4 ports, two for each controller, and each controller can
manage its own set of LUNs. So the aggregate is much higher than you
claim.


Not when compared to, say, an HDS with up to 64 ports. Expensive, yes,
but we're not talking about cost, just performance, as I've said
before.




In fact, the controller is often not the bottleneck; the disk spindles
are.


You're saying that 64 drives would be a bottleneck and not the single
controller in front of them? Right.


No. I am saying that in practice, the bottleneck isn't as bad as you
might make it out to be for most people. But I'm also trying to be
very careful and limit it to this specific request, which was 2TB with
14 drives.

In my real-world EXPERIENCE, this environment will most likely not hit
any controller bottlenecks.


Wrong. We are having performance problems on an HDS 9980V fully
loaded with both ports and cache. It's a matter of the number of hosts
and the IO type. If we can drag down that type of config, there is
zero way a 4 port/2 controller system is going to make it. And we're
not talking OLTP or anything, just plain ol' databases for business
modeling. A lot of hosts, though. But that's one of the main reasons
you get a SAN: to share. Otherwise you could get by with DAS.
And this is not a spindle problem. We're taxing the fiber connections
as well as the port capacity.
FC drives can transfer data at 130 to 150MB/sec. So under even
marginal conditions 2 drives will saturate a link, let alone a single
controller.
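
Sanity-checking that claim with the numbers given (assuming a 2Gb/sec
FC link, which is my assumption about the environment in question):

    # Link saturation check: a 2 Gb/s FC link carries ~200 MB/s of
    # payload after 8b/10b encoding; compare against the quoted
    # 130-150 MB/s per-drive figure (an interface/burst rate).
    link_mb_s = 2.0 * 1000 / 10      # ~200 MB/s usable per link
    drive_mb_s = 130                 # low end of the quoted range
    print(f"{link_mb_s / drive_mb_s:.1f} drives fill one link")  # ~1.5

so by these figures two drives streaming flat-out are already past what
one link can carry. (Sustained media rates are lower than the interface
rate, so the real crossover point depends on the workload.)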




In practice, having lots of spindles to service an I/O in a LUN on the
EVA will alleviate most of the bottleneck problems that most workloads
see - in my experience.


A single LUN means a single system mounting it, generally. The shared
aspect of SAN means more than one LUN and more than one system would be
accessing data behind that controller.


Both controllers are active, with 2 ports each.


Again, minuscule comparatively.


The assumption is that the controller is the bottleneck, something
which is not necessarily true, especially at the 2TB EVA3000 level
that the original poster is considering.


Again, not an assumption but a fact stated by HP. You should talk to
them more without your rose-colored glasses on.
Ask the hard questions that you apparently don't want the answers to.


Your interpretation of information stated by HP, actually. I HAVE
talked to them, many times, and in the context of many different
customer environments. There is no one-size-fits-all as your
"explanation" seems to portray.


I never said there was a one-size-fits-all; I even stated the exact
opposite, if you would read the post.
It's hard to misinterpret this:
ME: So all this virtualization has to happen behind one controller,
correct?
HP: Yes
ME: So that controller could be a serious bottleneck on throughput and
potentially even IOPS?
HP: Yes
Not a lot of room for confusion there.


The real difference is that I not only have real-world experience in
many different environments, but even with all that, I'm not foolish
enough to claim I know the issues this particular person will have.
I recognize that specific instances need a bit more investigation.


But you're foolish enough to claim I made statements I did not. What
a gem.
The OP asked about the EVA, and I told them what I knew. The only
thing I said against it was that performance lacked, and this is true.



I'm not saying the EVA doesn't have a place; the virtualization
capabilities of that single controller are actually cool. But as I
said, if performance is your main concern then the EVA is not on the
short list. Not by a long shot.


Not only are you incorrect about the "single controller" issue, but
you are drawing broad-based conclusions that may not apply to specific
circumstances - especially in this case, with only 14 drives and 2TB
of storage.


So when 2 drives saturate your single FC link to your controller, what
do you do with the other 12?


I've seen fully-configured EVA5000s that were hit very heavily, and
the so-called performance issues you claim were never evident in the
customer's applications.


An application-specific test is not a performance benchmark unless the
app has heavy performance requirements. "Hit very heavily" is pretty
damn vague when it comes to IO patterns.

~F
#13
Old March 17th 05, 03:36 PM
jlsue

On Thu, 17 Mar 2005 01:48:26 GMT, Faeandar wrote:

On Wed, 16 Mar 2005 19:41:42 GMT, jlsue wrote:



There are 4 ports, two for each controller, and each controller can
manage its own set of LUNs. So the aggregate is much higher than you
claim.


Not when compared to, say, an HDS with up to 64 ports. Expensive, yes,
but we're not talking about cost, just performance, as I've said
before.


We weren't talking about comparisons. We were talking about your
response to another post, which I quote below:

*Biggest problem with the EVA line is performance. To get that cool
*virtualization you are talking about, the drives all have to be behind
*the same controller (and its failover partner). This means you are
*limited to the IO and bandwidth of that one controller for the LUNs.


My objection is that you say it has a performance problem. No
qualifications. No real-world experience. All based on some
theoretical ideas. These ideas may apply to specific circumstances,
or they may not.

Oh, and again, it is incorrect to say that "the drives all have to be
behind the same controller" and that you are "limited to the IO and
bandwidth of that one controller for the LUNs". The implication is
that all the IO goes through one controller at a time, and possibly
one FC port. I realize I may be reading too much into this, but I
feel it's important to dispel that myth.
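
To make that concrete, here's a toy sketch (illustrative only; the LUN
names and counts are made up, not from any real configuration) of how
LUN ownership can be spread across both active controllers so no
single controller carries all the IO:

    # Toy illustration: distribute LUN ownership across two active
    # controllers, each presenting its LUNs on its own 2 host ports.
    luns = [f"lun{i}" for i in range(8)]
    controllers = {"A": [], "B": []}
    for i, lun in enumerate(luns):
        owner = "A" if i % 2 == 0 else "B"   # alternate ownership
        controllers[owner].append(lun)
    for ctrl, owned in controllers.items():
        print(f"controller {ctrl} owns {owned} on its 2 ports")

Each LUN is still served by one controller at a time, but the set of
LUNs - and therefore the aggregate IO - is split across both.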


No. I am saying that in practice, the bottleneck isn't as bad as you
might make it out to be for most people. But I'm also trying to be
very careful and limit it to this specific request, which was 2TB with
14 drives.

In my real-world EXPERIENCE, this environment will most likely not hit
any controller bottlenecks.


Wrong. We are having performance problems on an HDS 9980V fully
loaded with both ports and cache. It's a matter of the number of hosts
and the IO type. If we can drag down that type of config, there is
zero way a 4 port/2 controller system is going to make it. And we're
not talking OLTP or anything, just plain ol' databases for business
modeling. A lot of hosts, though. But that's one of the main reasons
you get a SAN: to share. Otherwise you could get by with DAS.
And this is not a spindle problem. We're taxing the fiber connections
as well as the port capacity.
FC drives can transfer data at 130 to 150MB/sec. So under even
marginal conditions 2 drives will saturate a link, let alone a single
controller.


Again, I didn't say it would never have a performance problem, only
that it is not valid for you to claim - as I show above - that it is
definitely a problem, implying that it *will* affect the original
poster's workload. I go on to say that in the smaller environment
which was originally discussed, the EVA will probably do very well,
based on my real-world experience.

Also, it seems you're really stretching things when you try to compare
a very expensive, very high-end disk array to the EVA line.

In reality, we have many large SAN environments utilizing EVA
controllers that perform fantastically. True, the EVA doesn't have
64 FC ports, but the newer ones have 8 at the higher end. And at the
price, having multiple arrays is not out of the question, *IF* the
environment requires such a configuration.

It's a difference in philosophy. I sincerely doubt you'd be able to
get someone who is in the market for something priced in the range of
the EVA3000 interested in spending the money it takes to get a
high-end array.

Oh, and just to put things into perspective, my group - which covers
only about 7-8 states in the central region - is currently involved in
120+ EVA installations at any one time. And many of these are
return customers (i.e., they're happy with the solution). This would
seem to indicate that the "problems" you highlight may not be as
widespread *in practice* as the theoretical limits would suggest.

A single LUN means a single system mounting it, generally. The shared
aspect of SAN means more than one LUN and more than one system would be
accessing data behind that controller.


Both controllers are active, with 2 ports each.


Again, minuscule comparatively.


Again, in practice, it works fantastically for many workloads and
environments.



The assumption is that the controller is the bottleneck, something
which is not necessarily true, especially at the 2TB EVA3000 level
that the original poster is considering.

Again, not an assumption but a fact stated by HP. You should talk to
them more without your rose-colored glasses on.
Ask the hard questions that you apparently don't want the answers to.


Your interpretation of information stated by HP, actually. I HAVE
talked to them, many times, and in the context of many different
customer environments. There is no one-size-fits-all as your
"explanation" seems to portray.


I never said there was a one-size-fits-all; I even stated the exact
opposite, if you would read the post.


I have read your post. You never qualify what you say. You state a
"performance problem" as fact. This implies a one-size-fits-all
situation. It is important to me that this be qualified a bit better.

It's hard to misinterpret this:
ME: So all this virtualization has to happen behind one controller,
correct?
HP: Yes
ME: So that controller could be a serious bottleneck on throughput and
potentially even IOPS?
HP: Yes
Not a lot of room for confusion there.


Perhaps not, but then you've twisted the above statement that the
"controller *could* be a serious bottleneck..." (emphasis mine) into
suddenly being a point of fact that it *is* a performance bottleneck.

I refer you back to your original post. Nowhere do you qualify this.
And nowhere do you give any indication that you're speaking from
real-world testing. And nowhere do you take into account this
particular poster's workload.



The real difference is that I not only have real-world experience in
many different environments, but even with all that, I'm not foolish
enough to claim I know the issues this particular person will have.
I recognize that specific instances need a bit more investigation.


But you're foolish enough to claim I made statements I did not. What
a gem.
The OP asked about the EVA, and I told them what I knew. The only
thing I said against it was that performance lacked, and this is true.


This is not true. You can't state a fact unless you've investigated
and tested in the target environment. Your own posts demonstrate that
you're speaking completely from a theoretical standpoint, based on
your own environment.


I'm not saying the EVA doesn't have a place; the virtualization
capabilities of that single controller are actually cool. But as I
said, if performance is your main concern then the EVA is not on the
short list. Not by a long shot.


Not only are you incorrect about the "single controller" issue, but
you are drawing broad-based conclusions that may not apply to specific
circumstances - especially in this case, with only 14 drives and 2TB
of storage.


So when 2 drives saturate your single FC link to your controller, what
do you do with the other 12?


If and when that happens, we would have something to discuss and
investigate. You've got to be kidding, trying to introduce a
theoretical problem into a discussion about a specific person's
request.

I reiterate - it's all about this person's environment and workload.
You don't know that any more than I do. To claim that the EVA is a
bottleneck without knowing this information is not helpful to them.



I've seen fully-configured EVA5000s that were hit very heavily, and
the so-called performance issues you claim were never evident in the
customer's applications.


An application-specific test is not a performance benchmark unless the
app has heavy performance requirements. "Hit very heavily" is pretty
damn vague when it comes to IO patterns.


Application-specific testing is THE performance benchmark that really
matters. If you only buy hardware based on industry-standard
benchmarks, without ever testing in your environment, well... my
experience is that this places you in the minority.
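
For what it's worth, an application-specific test doesn't have to be
elaborate. A minimal sketch (the path, IO size, and count here are
placeholders, not anything from this thread):

    # Minimal application-specific IO test: time the access pattern
    # your app actually uses (here, 8 KB random reads) against a file
    # on the array, rather than trusting a generic benchmark number.
    # Assumes a POSIX host and a test file larger than IO_SIZE.
    import os, random, time

    PATH = "/mnt/eva_lun/testfile"   # hypothetical file on the array
    IO_SIZE = 8192
    COUNT = 10000

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    start = time.time()
    for _ in range(COUNT):
        os.pread(fd, IO_SIZE, random.randrange(0, size - IO_SIZE))
    elapsed = time.time() - start
    os.close(fd)
    print(f"{COUNT / elapsed:.0f} random reads/sec at {IO_SIZE} bytes")

Swap the read pattern for whatever your application actually does, and
watch out for host-side caching skewing the numbers.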

--- jls
The preceding message was personal opinion only.
I do not speak in any authorized capacity for anyone,
and certainly not my employer.
 



