Using Storage Spaces with win 10

  #1  
Old March 9th 16, 09:29 PM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

When I set up Storage Spaces in my Win 10 system for data, I chose
two-way mirror, using 2 physical drives, because I wanted "resiliency"
(I wanted data redundancy in case of a drive failure). That seems to
work fine. But now that I've learned a bit more about Storage Spaces,
I'm wondering if maybe "read" performance would be improved if more
than 2 physical drives were used. The thinking is that if the data
were spread across 3 instead of 2, reads might be 50% faster? Is this
the case? Does it happen by default, or must one configure the storage
space to use 3 columns instead of two at initial creation to get the
higher performance? (I'm not referring to using 3 drives to get a
parity effect... I don't need that big a hit on write performance.)
I'm guessing that if I simply add another drive to my existing setup,
I wouldn't see any increased read performance... is that also
correct? One last question: since the Storage Spaces interface
doesn't seem to allow specifying the number of columns, and since
the articles I've read state that this must be done with PowerShell,
can anyone tell me the PowerShell command that would do this?

I've not mentioned Simple storage spaces, but the MS docs I've read
state that adding more drives does directly increase read
performance... but I'm not willing to give up the data redundancy.
Also, I don't really need a 3-way mirror.
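
(For reference, a minimal sketch of the kind of PowerShell that sets
the column count, assuming the Storage module cmdlets that ship with
Win 10; the pool and space names below are placeholders, not anything
from this thread. The column count is fixed when the virtual disk is
created, and a mirror needs at least NumberOfColumns x
NumberOfDataCopies physical drives, so 2 columns with 2 copies implies
at least 4 drives:

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
    -NumberOfColumns 2 -UseMaximumSize

The new virtual disk is then initialized, partitioned and formatted
the usual way, from Disk Management or the Disk cmdlets.)
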
  #2  
Old March 10th 16, 12:36 AM posted to alt.comp.hardware.pc-homebuilt
Flasherly[_2_]

On Wed, 09 Mar 2016 15:29:47 -0600, Charlie Hoffpauir
wrote:

I'm wondering if maybe "read" performance would be improved if more
than 2 physical drives were used. The thinking is that if the data
were spread across 3 instead of 2, reads might be 50% faster?


No. (Unless the other drives are set up in a striped RAID; then yes,
but for backups that's more things that can go wrong.) It's limited
more by how the drive and the disk controller (on the motherboard)
interact: anywhere from hardly faster than a good USB2 transfer in the
worst case to blazing USB3-class speeds in the best. It's one of those
things that has to be measured and tested for each individual build,
rather than taken as generally indicative or expected from published
benchmarks, which may themselves need independent interpretation
rather than a manufacturer's high self-esteem.
  #3  
Old March 10th 16, 01:49 AM posted to alt.comp.hardware.pc-homebuilt
Paul

Charlie Hoffpauir wrote:
[snip]


There's a FAQ here, with some PowerShell commands in it.

https://blogs.msdn.microsoft.com/b8/...nd-efficiency/

I assume you'd get a bandwidth improvement from using
a 2-way mirror with four disks instead of two. But
my attempts to experiment (in a virtual machine) failed.
The Storage Spaces pool formed OK, but I wasn't able
to control VirtualBox in a way that let me make useful
measurements. (I wanted to limit the bandwidth of
each virtual disk, so I could watch them "adding
together". Didn't work worth a damn.) The results
were all over the place and a waste of time.

And I don't have enough disks to do real, physical
experiments.

Paul


  #4  
Old March 10th 16, 04:07 AM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

On Wed, 09 Mar 2016 20:49:19 -0500, Paul wrote:
[snip]


One of the MS FAQs I read talked about read performance increasing
linearly as disks are added to a Simple storage space (no mirror,
basically JBOD): 2 disks give 2x the read speed of 1 disk, 3 disks 3x
the speed, etc. But when it came to mirror, I didn't see it explained.
They went into the loss of write speed (a drastic loss) if parity was
included, i.e. 2-way mirror with parity using 3 drives... but it just
seemed that there should be some read performance improvement if 3
columns could be set up instead of 2 with a 2-way mirror. I don't
think I have room on my SATA ports to go to 4 drives, since I need an
external SATA occasionally, and I have an optical drive and an SSD for
the OS and programs.
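
(As an aside, the column and copy counts an existing space actually
got can be checked from PowerShell; a minimal sketch, assuming the
Storage module that ships with Win 10 and a space named "Data", which
is just a placeholder:

Get-VirtualDisk -FriendlyName "Data" |
    Format-List FriendlyName, ResiliencySettingName,
                NumberOfDataCopies, NumberOfColumns

If the existing space reports NumberOfColumns 1, adding more disks to
the pool later won't change that for the existing space.)
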

Maybe I'll just try to set up a small test using small drives and see
what I find. I have four oldish 500 GB drives that I could use for
that.

Thanks for the link to the PS commands.
  #5  
Old March 10th 16, 04:37 AM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

On Wed, 09 Mar 2016 22:07:17 -0600, Charlie Hoffpauir wrote:
[snip]


I was just thinking about how I'd "test" read speeds. I have some
fairly large files, one at nearly 3 GB and another just over 2 GB,
but if I time the copy from one storage space volume to another,
I'd be including the write speed as well as the read speed, and
probably limiting the measured read speed to however fast the writes
could go (since writing to a 2-way mirror is considerably slower than
reading).

I tried HDTune and it just won't work with Storage Spaces, at least
the free version won't. Any suggestions?
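
(One rough way to time a pure read, sketched below: stream the file
through a buffer, discard it, and divide bytes by elapsed seconds. The
path is a placeholder, and the number is only honest if the file isn't
already sitting in the Windows file cache, which is exactly the
problem discussed further down the thread:

$path  = "D:\bigfile.bin"         # placeholder: large file on the space under test
$buf   = New-Object byte[] (4MB)  # 4 MiB read buffer
$fs    = [System.IO.File]::OpenRead($path)
$sw    = [System.Diagnostics.Stopwatch]::StartNew()
$total = 0L
while (($n = $fs.Read($buf, 0, $buf.Length)) -gt 0) { $total += $n }  # read and discard
$sw.Stop()
$fs.Close()
"{0:N0} MB/s" -f ($total / 1MB / $sw.Elapsed.TotalSeconds)

This measures only the read side; nothing is written anywhere.)
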
  #6  
Old March 10th 16, 05:27 AM posted to alt.comp.hardware.pc-homebuilt
Paul

Charlie Hoffpauir wrote:
[snip]


That's what I tried too. If you declare the space to be 1TB
but use a few small disks, HDTune tries to test the entire
1TB, much of which is "faked", and it reports an enormous read
speed. In my test setup, the top of the 1TB space was giving 7GB/sec.

To test using files, you set up a RAMDisk and stage the source files
on that. That's if you wanted to do a write test, and didn't
want the "source" storage device polluting the result.

I use this for a RAMDisk. The free version used to allow up
to a 4GB RAMDisk. I use the free version on this machine.
I bought the paid version for my other machine, because
it has a lot more RAM than that. And I do run a RAMDisk
over there, all the time. The paid license is per machine,
and I use the same license key in Win7/Win8/Win10 on the
other machine.

http://memory.dataram.com/products-a...ftware/ramdisk

You might also need to purge the system file cache
by doing a large read. I think there's also some command
that will purge the cache, but I don't know if I can find
that one right now. (Actually, I was able to find a thread
that said the idea I had in mind wouldn't work...)

OK, here's a technique.

http://www.codingjargames.com/blog/2...ws-file-cache/

fsutil file setvaliddata fileA 123454321

What that does, is simulate writing the entire file.
But in a very short time. Say the file is actually 123,454,321
bytes in size. By entering the command that way, the
file size is not modified (since the file is that
size anyway). It just causes the file to be evicted
from the system file cache (in system memory), so
that the next time you attempt to read "fileA", you
will be reading the physical device. No cheating
by pulling the data from the system file cache instead.

So the idea would be:

dd if=/dev/random of=F:\somebig.bin bs=1048576 count=1024
fsutil file setvaliddata F:\somebig.bin 1073741824

That would leave a 1GiB file on F: and clear the system
file cache. The next time I attempt to read F:\somebig.bin,
I should be benching the F: disk hard drive speed, and not
pulling the data from the system file cache.

Linux has a "dropcache" kind of command, that does
a much better job. It releases the entire cache in
one shot. And that is not a performance optimization
(to make programs go faster). It's just for cases
where you don't want any data sitting in a read file
cache, screwing up your benchmarks. (Like my test case
above that got 7GB/sec for a read speed, which of
course is impossible. Any time a result doesn't make
sense, you know the result was "pulled out of the air"
or "pulled out of RAM".)

Many enthusiast sites reboot the computer between test cases
when they bench, which is thorough as an initialization
technique, but wasteful. That's another way to purge a read
cache.

Paul
  #7  
Old March 10th 16, 02:28 PM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

On Thu, 10 Mar 2016 00:27:29 -0500, Paul wrote:
[snip]


Since I'm not that comfortable with Linux, I searched some more and
found a utility from MS for servers that "should" work: DiskSpd.

From their info: "DiskSpd provides the functionality needed to
generate a wide variety of disk request patterns, which can be very
helpful in diagnosis and analysis of I/O performance issues with a lot
more flexibility than older benchmark tools like SQLIO. It is
extremely useful for synthetic storage subsystem testing when you want
a greater level of control than that available in CrystalDiskMark."

It's available here:
https://gallery.technet.microsoft.co...orage-6cd2f223

I'll try to build my temporary multi-disk system this weekend, and if
successful, run a few speed tests. A quick look at the documentation
seems to indicate lots of flexibility in "what" you're able to test.
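
(A hedged example of the sort of DiskSpd run that fits this test: a
timed, read-only, large-block sequential pass against a scratch file
on the space, with software caching disabled so the file cache can't
inflate the numbers. The drive letter and file name are placeholders:

diskspd -c8G -b1M -d30 -t2 -o4 -w0 -Sh -L E:\spacetest.dat

-c8G creates an 8 GiB test file, -b1M uses 1 MiB blocks, -d30 runs for
30 seconds, -t2 uses two threads with -o4 outstanding I/Os each, -w0
means 0% writes (pure read), -Sh disables software caching and
hardware write caching, and -L adds latency statistics. If the file is
newly created, it may be worth writing it once first so the reads hit
real data.)
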
  #8  
Old March 12th 16, 06:02 AM posted to alt.comp.hardware.pc-homebuilt
Paul

Charlie Hoffpauir wrote:
[snip]


I did a test tonight with two disks and four disks.

In a 2-way mirror, four disks do not double the bandwidth.
It isn't RAID 10.

It seems to operate like this.

Disk 0 + Disk 1 --- span
|
Mirror
|
Disk 2 + Disk 3 --- span

On my disks, the write speed with two disks and with four
disks was the same, 135MB/sec or so.

Also, in my testing the "fsutil... setvaliddata" method
didn't work. So I had to resort to an old-fashioned technique
to eliminate the cache. Create an extra-large file on a
storage device not associated with the test, read it,
and make sure the extra-large file is larger than
the system file cache. (I used a 16GB file on an 8GB machine.)
This ensures that all memory of the file you just wrote
to the "Storage Space" NTFS partition is forgotten. Then,
when you copy the file off and do your read test case,
you get a pure hardware speed.

I use a RAMDisk as the second storage device, so it will
have minimal impact on the results.

I don't see a point in testing seek time: if the seeks
hit in the cache, they might be very fast, and if the
seeks miss in the cache, the seek should be as fast as
the disk allows. The only thing I didn't attempt to measure
is whether the first of the two sides of the mirror
to deliver data gets to deliver it right away or not
(the way a traditional hardware RAID 1 would work).

Anyway, I had fun, and I won't be using Storage Spaces
for any real work. I don't have enough big disks to make
it worthwhile. I was using 4 x 500GB for this test. And
I will be restoring them from backup, to put the original
data back in place.

To delete the Storage Space, you go to Disk Management
first and delete the drive letter in there. Then, when you
use the Storage Spaces interface, do Change Settings and
click Delete, and it runs without error. The only thing it
doesn't do is "release" the disks to Disk Management
as ordinary disks. So now I have to figure out
how to fix that.

Paul
  #9  
Old March 12th 16, 03:29 PM posted to alt.comp.hardware.pc-homebuilt
Charlie Hoffpauir

On Sat, 12 Mar 2016 01:02:52 -0500, Paul wrote:
[snip]


Many thanks, Paul... that really answers my question.

I'm now only curious why your disks weren't released after you removed
them. When I completed my tests, I simply went into Storage Spaces
management and went through the removal steps there... no
eliminating the drive letters at all (in fact the letters are still
there, because I left two drives in place).
  #10  
Old March 12th 16, 11:09 PM posted to alt.comp.hardware.pc-homebuilt
Paul

Charlie Hoffpauir wrote:
[snip]


I used PTEDIT32 to fix them. It's no longer offered for
download, so you'd already need to have a copy.

The partition on a Storage Spaces disk is marked "0xEE",
which, as far as I know, is the GPT protective marker. I changed
the partition type to "0x00", rebooted, and Disk Management
then began to show the pool disks as ordinary disks. So
it wasn't a big deal to fix.

Initially, I wanted to use the "official" way, Microsoft
DiskPart. But the physical disks in question would not
show up in "list disk", so I couldn't swat at them from there.

But an actual partition table editor made the job easy.

After the reboot and review in Disk Management, I could
then use the disks again for other purposes.
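
(For reference, the Storage module cmdlets are supposed to handle this
cleanup without a partition editor; a minimal sketch, with placeholder
names, assuming the space and pool still exist when you start:

Remove-VirtualDisk -FriendlyName "Data"       # delete the space itself
Remove-StoragePool -FriendlyName "Pool1"      # then delete the pool
Clear-Disk -Number 3 -RemoveData -RemoveOEM   # wipe leftover metadata from a freed disk

The disk number for Clear-Disk comes from Get-Disk. Whether this route
still works once the pool is already half-deleted, as above, is
another question.)
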

As of this moment, all the disks have their original
content on them again. So my house-cleaning process
is done.

And now I know, if I need to do any RAID testing
some day, I *do* have the materials to do it.
Just takes a little backup and restore to free
up the resources needed.

Paul
 



