Using Storage Spaces with win 10

  #1  
Old March 9th 16, 10:29 PM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

When I set up Storage Spaces on my Win 10 system for data, I chose
a two-way mirror using 2 physical drives, because I wanted "resiliency"
(data redundancy in case of a drive failure). That seems to
work fine. But now that I've learned a bit more about Storage Spaces,
I'm wondering if "read" performance would be improved if more
than 2 physical drives were used. The thinking is that if the data
were spread across 3 instead of 2, reads might be 50% faster. Is this
the case? Does it happen by default, or must the storage space be
configured to use 3 columns instead of 2 at initial creation to get the
higher performance? (I'm not referring to using 3 drives to get
parity... I don't need that big a hit on write performance.)
I'm guessing that if I simply add another drive to my existing setup,
I wouldn't see any increased read performance... is that also
correct? One last question: since the Storage Spaces interface
doesn't seem to allow specifying the number of columns, and since
the articles I've read state that this must be done with PowerShell,
can anyone tell me the PowerShell command that would do this?

I haven't mentioned Simple storage spaces, but the MS docs I've read
state that adding more drives does directly increase read
performance... but I'm not willing to give up the data redundancy.
Also, I don't really need a 3-way mirror.
  #2  
Old March 10th 16, 01:36 AM posted to alt.comp.hardware.pc-homebuilt
Flasherly[_2_]

On Wed, 09 Mar 2016 15:29:47 -0600, Charlie Hoffpauir wrote:

I'm wondering if maybe "read" performance would be improved if more
than 2 physical drives were used. The thinking is that if the data
were spread across 3 instead of 2, reads might be 50% faster?


No. (Unless those extra drives are set up as a striped RAID; then yes,
but for backup purposes that's just more things that can go wrong.)
It's mostly limited by how the drive and the disk controller (on the
motherboard) interact: in a worst-case scenario hardly faster than a
good USB2 transfer, in the best case USB3-class blazing speeds. It's
one of those things that has to be measured and tested individually
for the particular build, rather than taken as generally indicative or
expected from published benchmarks, which may further need independent
interpretation rather than a manufacturer's high esteem.
  #3  
Old March 10th 16, 02:49 AM posted to alt.comp.hardware.pc-homebuilt
Paul

Charlie Hoffpauir wrote:
I'm wondering if "read" performance would be improved if more
than 2 physical drives were used. [...] Since the Storage Spaces
interface doesn't seem to allow specifying the number of columns,
and since the articles I've read state that this must be done with
PowerShell, can anyone tell me the PowerShell command that would do
this?


There's a FAQ here, with some powershell commands in it.

https://blogs.msdn.microsoft.com/b8/...nd-efficiency/
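
For the column question specifically, the column count is set when the
virtual disk (the "space") is created and can't be changed afterwards.
Something along these lines should do it from an elevated PowerShell
prompt (untested here, and the pool/space names are just placeholders):

# Assumes a pool already exists; 2 copies x 2 columns needs at
# least 4 physical disks in the pool.
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "MirrorSpace" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -NumberOfColumns 2 `
    -UseMaximumSize

# To see what an existing space is using:
Get-VirtualDisk | Select-Object FriendlyName, NumberOfColumns, NumberOfDataCopies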

I assume you'd get a bandwidth improvement from using a
2-way mirror with four disks instead of two. But
my attempts to experiment (in a virtual machine) failed.
The Storage Spaces pool formed OK, but I wasn't able
to control VirtualBox in an appropriate way to make
measurements. (I wanted to limit the bandwidth of
each virtual disk, so I could watch them "adding
together". Didn't work worth a damn.) The results
were "all over the place" and a waste of time.

And I don't have enough disks to do real, physical
experiments.

Paul


  #4  
Old March 10th 16, 05:07 AM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

On Wed, 09 Mar 2016 20:49:19 -0500, Paul wrote:

There's a FAQ here, with some powershell commands in it.

https://blogs.msdn.microsoft.com/b8/...nd-efficiency/

I assume you'd get a bandwidth improvement from using a
2-way mirror with four disks instead of two. [...]

One of the MS FAQs I read talked about linearly increasing read
performance as disks are added to a Simple storage space (no
mirror, basically JBOD): 2 disks give 2x the read speed of 1 disk,
3 disks 3x, etc. But when it came to mirrors, I didn't see it
explained. They went into the loss of write speed (a drastic loss)
if parity was included, i.e. a 2-way mirror with parity using 3
drives... but it just seemed that there should be some read
performance improvement if 3 columns could be set up instead of 2
with a 2-way mirror. I don't think I have enough SATA ports to go to
4 drives, since I need an external SATA connection occasionally, and
I have an optical drive and an SSD for the OS and programs.

Maybe I'll just set up a small test using small drives and see
what I find. I have four oldish 500GB drives that I could use.

Thanks for the link to the PS commands.
  #5  
Old March 10th 16, 05:37 AM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

On Wed, 09 Mar 2016 22:07:17 -0600, Charlie Hoffpauir wrote:

Maybe I'll just set up a small test using small drives and see
what I find. I have four oldish 500GB drives that I could use.


I was just thinking about how I'd "test" read speeds. I have some
fairly large files, one nearly 3 GB and another just over 2
GB, but if I time a copy from one storage space volume to another,
I'd be including the write speed as well as the read speed, and
probably limiting the read speed by how fast the writes could go
(since writing to a 2-way mirror is considerably slower than reading).

I tried HDTune and it just won't work with storage spaces, at least
the free version won't. Any suggestions?
  #6  
Old March 10th 16, 06:27 AM posted to alt.comp.hardware.pc-homebuilt
Paul

Charlie Hoffpauir wrote:

I was just thinking about how I'd "test" read speeds. [...] If I time
a copy from one storage space volume to another, I'd be including the
write speed as well as the read speed. [...]

I tried HDTune and it just won't work with storage spaces, at least
the free version won't. Any suggestions?


That's what I tried too. If you declare the space to be 1TB
but use only a few small disks, HDTune tries to test the entire
1TB, much of which is "faked" and gives an enormous read speed.
In my test setup, the top of the 1TB space was giving 7GB/sec.

To test using files, you set up a RAMDisk and stage the source files
on that. That's if you wanted to do a write test, and didn't
want the "source" storage device polluting the result.

I use this for a RAMDisk. The free version used to allow up
to a 4GB RAMDisk. I use the free version on this machine.
I bought the paid version for my other machine, because
it has a lot more RAM than that. And I do run a RAMDisk
over there, all the time. The paid license is per machine,
and I use the same license key in Win7/Win8/Win10 on the
other machine.

http://memory.dataram.com/products-a...ftware/ramdisk

You might also need to purge the system file cache
by doing a large read. I think there's also a command
that will purge the cache, but I can't find
it right now. (Actually, I found a thread
that said the idea I had in mind wouldn't work...)

OK, here's a technique.

http://www.codingjargames.com/blog/2...ws-file-cache/

fsutil file setvaliddata fileA 123454321

What that does is simulate writing the entire file,
but in a very short time. Say the file is actually 123,454,321
bytes in size. By entering the command that way, the
file size is not modified (since the file is already that
size). It just causes the file to be evicted
from the system file cache (in system memory), so
that the next time you attempt to read "fileA", you
will be reading the physical device, with no cheating
by pulling the data from the system file cache instead.

So the idea would be:

dd if=/dev/random of=F:\somebig.bin bs=1048576 count=1024
fsutil file setvaliddata F:\somebig.bin 1073741824

That would leave a 1GiB file on F: and evict it from the system
file cache. The next time I attempt to read F:\somebig.bin,
I should be benching the speed of the F: hard drive, and not
pulling the data from the system file cache.
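
To time the read itself without involving a second disk, a rough
PowerShell sketch would be something like this (it pulls the whole
file into RAM, so only use it on a file that fits in memory):

# Time a cold read of the 1 GiB test file and report throughput
$t = Measure-Command { $data = [System.IO.File]::ReadAllBytes('F:\somebig.bin') }
"{0:N0} MiB/s" -f (1024 / $t.TotalSeconds)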

Linux has a "drop_caches" control (echo 3 > /proc/sys/vm/drop_caches)
that does a much better job: it releases the entire cache in
one shot. And that is not a performance optimization
(to make programs go faster). It's just for cases
where you don't want any data sitting in a read file
cache, screwing up your benchmarks. (Like my test case
above that got 7GB/sec for a read speed, which of
course is impossible. Any time a result doesn't make
sense, you know the result was "pulled out of the air"
or "pulled out of RAM".)

Many enthusiast sites reboot the computer between test cases
when they bench, which is thorough as an initialization technique
but wasteful. That's another way to purge a read cache.

Paul
  #7  
Old March 10th 16, 03:28 PM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

On Thu, 10 Mar 2016 00:27:29 -0500, Paul wrote:

So the idea would be:

dd if=/dev/random of=F:\somebig.bin bs=1048576 count=1024
fsutil file setvaliddata F:\somebig.bin 1073741824

That would leave a 1GiB file on F: and evict it from the system
file cache. [...]


Since I'm not that comfortable with Linux, I searched some more and
found a utility from MS for servers that "should" work: DiskSpd.

From their info: "DiskSpd provides the functionality needed to
generate a wide variety of disk request patterns, which can be very
helpful in diagnosis and analysis of I/O performance issues with a lot
more flexibility than older benchmark tools like SQLIO. It is
extremely useful for synthetic storage subsystem testing when you want
a greater level of control than that available in CrystalDiskMark."

It's available here:
https://gallery.technet.microsoft.co...orage-6cd2f223

I'll try to build my temporary multi-disk system this weekend and, if
successful, run a few speed tests. A quick look at the documentation
suggests lots of flexibility in "what" you're able to test.
  #8  
Old March 10th 16, 05:24 PM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

On Thu, 10 Mar 2016 08:28:29 -0600, Charlie Hoffpauir wrote:

I'll try to build my temporary multi-disk system this weekend, and if
successful, run a few speed tests. A quick look at the documentation
seems to indicate lots of flexibility in "what" you're able to test.


Well, I tried to run DiskSpd on my present build, but no joy. I got
the program to run with this command line:

diskspd -b8K -d30 -o4 -t8 -h -r -w25 -L -Z1G -c20G Y:\iotest.dat > DiskSpeedResults.txt

And it runs for a while (probably about 30 sec), but no text file is
written to the disk. Looking at the disk (a 2-way mirror volume) I see
the 20 GB data file that the program creates, but no text file. If I
can't get it to run on my present system, there's no sense in building
up the temp system with more disks.
  #9  
Old March 10th 16, 09:47 PM posted to alt.comp.hardware.pc-homebuilt
Paul

Charlie Hoffpauir wrote:

Well, I tried to run DiskSpd on my present build, but no joy. I got
the program to run with this command line:

diskspd -b8K -d30 -o4 -t8 -h -r -w25 -L -Z1G -c20G Y:\iotest.dat > DiskSpeedResults.txt

And it runs for a while (probably about 30 sec), but no text file is
written to the disk. [...]


So remove the output redirection and watch the command as it
runs in your command prompt window:

diskspd -b8K -d30 -o4 -t8 -h -r -w25 -L -Z1G -c20G Y:\iotest.dat

It's possible there is a prompt waiting for you, like
you're supposed to give permission for the next step
in the benchmark. Without redirection you may see that
prompt.

*******

A neat solution for that, is to get yourself a copy of tee.exe

programname inputfile | tee outputfile

What tee does is echo what it sees to the
console window, as well as writing it into the
output file.

I typically use "tee" when doing software builds.
If I'm building Firefox from source, I tee the entire
textual output from the build, into a file. As otherwise,
it scrolls off the screen.

Coreutils has a tee.exe in it, for Windows users.

http://gnuwin32.sourceforge.net/packages/coreutils.htm
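
With diskspd, that would look something like this (assuming tee.exe
is somewhere on the PATH):

diskspd -b8K -d30 -o4 -t8 -h -r -w25 -L -Z1G -c20G Y:\iotest.dat | tee DiskSpeedResults.txt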

But until you're confident in your new utility, try it
first with no redirection.

Paul
  #10  
Old March 10th 16, 11:24 PM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

Going with the assumption that the "meat" of the information is in
the read/write latencies in the last table, I ran the DiskSpd utility
on several different volumes: C, my SSD drive; M, a Simple (no mirror)
Storage Space volume; D, an ordinary Seagate 2 TB drive; and Y, a
2-way mirror Storage Space volume.

The data:

C:

%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.054 | 0.098 | 0.054
25th | 0.517 | 0.516 | 0.517
50th | 0.549 | 0.547 | 0.548
75th | 0.583 | 0.575 | 0.580
90th | 0.646 | 0.605 | 0.631
95th | 0.760 | 0.637 | 0.724
99th | 1.863 | 0.812 | 1.575
3-nines | 3.033 | 1.016 | 2.966
4-nines | 3.439 | 1.598 | 3.401
5-nines | 4.952 | 2.656 | 4.839
6-nines | 6.530 | 4.655 | 6.530
7-nines | 6.842 | 4.655 | 6.842
8-nines | 6.842 | 4.655 | 6.842
max | 6.842 | 4.655 | 6.842

D:

%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.646 | 0.305 | 0.305
25th | 57.019 | 0.999 | 22.134
50th | 117.924 | 4.017 | 86.826
75th | 198.389 | 25.069 | 168.459
90th | 324.357 | 108.540 | 291.209
95th | 447.615 | 143.787 | 414.046
99th | 616.602 | 469.677 | 600.982
3-nines | 759.427 | 747.135 | 751.097
4-nines | 837.450 | 763.727 | 837.450
5-nines | 837.450 | 763.727 | 837.450
6-nines | 837.450 | 763.727 | 837.450
7-nines | 837.450 | 763.727 | 837.450
8-nines | 837.450 | 763.727 | 837.450
max | 837.450 | 763.727 | 837.450

M:

%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.181 | 0.173 | 0.173
25th | 11.591 | 0.332 | 7.282
50th | 39.781 | 0.724 | 27.151
75th | 232.176 | 85.514 | 179.854
90th | 469.246 | 168.400 | 417.374
95th | 629.072 | 227.541 | 579.232
99th | 1685.823 | 1605.151 | 1663.511
3-nines | 1893.940 | 1866.983 | 1890.843
4-nines | 2107.262 | 1870.728 | 2107.262
5-nines | 2107.262 | 1870.728 | 2107.262
6-nines | 2107.262 | 1870.728 | 2107.262
7-nines | 2107.262 | 1870.728 | 2107.262
8-nines | 2107.262 | 1870.728 | 2107.262
max | 2107.262 | 1870.728 | 2107.262

Y:

%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.246 | 0.332 | 0.246
25th | 11.666 | 57.782 | 12.136
50th | 24.558 | 378.271 | 36.009
75th | 71.990 | 638.723 | 192.022
90th | 337.932 | 1043.038 | 655.980
95th | 686.719 | 1669.005 | 888.766
99th | 1252.589 | 2724.614 | 2215.368
3-nines | 2544.115 | 2980.434 | 2890.963
4-nines | 2976.618 | 4622.139 | 4622.139
5-nines | 2976.618 | 4622.139 | 4622.139
6-nines | 2976.618 | 4622.139 | 4622.139
7-nines | 2976.618 | 4622.139 | 4622.139
8-nines | 2976.618 | 4622.139 | 4622.139
max | 2976.618 | 4622.139 | 4622.139

This looks pretty bad for Storage Spaces performance... but I need to
run more tests to see whether the 25% writes in the test mix are
adversely skewing the results (since I'm primarily interested only in
read speeds).
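
For the read-only pass I'll probably just rerun the same command with
the writes turned off (-w0 makes it 100% reads), something like this,
though I haven't tried it yet:

diskspd -b8K -d30 -o4 -t8 -h -r -w0 -L -c20G Y:\iotest.dat > ReadOnlyResults.txt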
 



