June 6th 04, 01:41 AM
perfnerd

(Robert Wessel) wrote in message ...
> (perfnerd) wrote in message ...

>> You don't migrate from RAID5 to RAID10. The read performance for
>> RAID5 and RAID10 is similar since the data is automatically striped.
>> So, there is no need to migrate for read performance. Any time a
>> block gets written, you just write it in RAID10, and let the migration
>> algorithms handle the movement back to RAID5.



> Bill and I were discussing a slightly different arrangement, where
> stripes are migrated between RAID5 and RAID10 configurations as
> appropriate. The earlier discussion involved a separate RAID10 area
> where updates were queued until they could be migrated to the main
> RAID5 area, presumably during a less busy time.


That was my point. When data is written, you write it in RAID10. As
the data ages and there is pressure on disk usage, you start
migrating from RAID10 to RAID5, during spare cycles. Once in RAID5
there is no need to migrate back to RAID10; read performance should
be the same or better in RAID5, as the data is striped over more
spindles.
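The write-path and aging policy above might be sketched roughly as
follows. This is a toy model, not a real array: all class and method
names are hypothetical, parity computation is elided, and migration is
triggered by block age alone rather than real space pressure or idle
detection.

```python
class HybridArray:
    """Toy model: new writes land in a RAID10 region; a background
    pass run during spare cycles demotes aged blocks to RAID5."""

    def __init__(self, age_threshold=60.0):
        self.raid10 = {}   # block_id -> (data, write_time)
        self.raid5 = {}    # block_id -> data
        self.age_threshold = age_threshold  # seconds before demotion

    def write(self, block_id, data, now):
        # All writes go to the RAID10 region first.
        self.raid10[block_id] = (data, now)

    def read(self, block_id):
        # Reads are served from whichever tier holds the block;
        # both tiers stripe, so read performance is comparable.
        if block_id in self.raid10:
            return self.raid10[block_id][0]
        return self.raid5[block_id]

    def migrate_spare_cycles(self, now):
        # Run during idle time: demote blocks older than the
        # threshold (a real array would compute parity here).
        for block_id in list(self.raid10):
            data, written = self.raid10[block_id]
            if now - written >= self.age_threshold:
                self.raid5[block_id] = data
                del self.raid10[block_id]
```

For example, a block written at time 0 stays in RAID10 until a spare-cycle
pass at time 120 demotes it; reads succeed before and after.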

If an existing block is re-written, you write it in RAID10 and mark
the old block as out-of-date. The invalid block is still used for
data recovery purposes, just not for data access. Then you let a
housekeeping process either convert the remaining blocks in the RAID5
checksum block back to RAID10, recalculate the checksum over the
remaining blocks without the updated data, or reincorporate the new
block into the RAID5 checksum block. Pick your rule, or come up with
heuristics to choose one. Scheduling a conversion to RAID10 for the
remaining blocks is probably simplest, as you can then let the
10-to-5 migration process handle the decision of when to migrate back
to RAID5.
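The "simplest rule" above (retire the whole checksum block by moving
its surviving members back to RAID10) might look like this. Again a
hypothetical sketch: names are invented, and the parity math itself is
omitted; the point is the stale-marking and housekeeping flow.

```python
class Raid5Stripe:
    """One RAID5 checksum block: member data blocks plus a set of
    superseded members kept only for recovery, not for reads."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block_id -> data
        self.stale = set()           # out-of-date, recovery-only


def rewrite(stripe, raid10, block_id, data):
    # The new data goes to RAID10; the old copy is merely marked
    # stale so it can still help reconstruct its stripe-mates.
    raid10[block_id] = data
    stripe.stale.add(block_id)


def housekeep(stripe, raid10):
    # Convert the remaining live members back to RAID10 and retire
    # the checksum block; the normal 10-to-5 migrator will repack
    # them into a fresh RAID5 stripe later.
    for block_id, data in stripe.blocks.items():
        if block_id not in stripe.stale:
            raid10[block_id] = data
    stripe.blocks.clear()
    stripe.stale.clear()
```

After a rewrite of one member and a housekeeping pass, all three
members end up in the RAID10 region, with the rewritten block holding
its new contents.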