#11
"Bill Todd" wrote in message ...
They may be smart enough to avoid a copy: if you lay out the RAID-10 material suitably (leaving an unused parity block in each copy's stripe, since space efficiency is not paramount for the temporary mirrored part of the storage) and have an explicit per-stripe map (which you probably want anyway if you're going to allow the division between RAID-10 and RAID-5 space to vary dynamically), you can just leave one copy of the data in place and only write the parity when you convert it to RAID-5.

Assuming that none of the RAID-10 data is still cached when the conversion is performed, this results in N reads and one write, rather than the N reads and N+1 writes a copy would require. (This can be further optimized by creating parity in memory as each RAID-10 disk segment is evicted from the cache, avoiding the reads at the cost of a single chunk of cache, which itself could be staged to disk - using the 'spare' disk in the RAID-10 stripe - if space got tight; you'd have to model this to decide exactly what strategy was best.)

The stripe map is likely maintained in stable RAM and updated on disk only occasionally: these aren't low-end arrays. And conversion doesn't necessarily occur that often: the only reason you *ever* need to move data from RAID-10 to RAID-5 storage is that you're getting tight on overall space and need the resulting near-factor-of-2 compression - and for many update-intensive workloads the net accumulation of data is relatively slow. (If the workload is not update-intensive, you might be better off just using vanilla-flavored RAID-5 anyway, fronted by a stable cache to consolidate small-chunk extensions of existing data into larger writes, if such extensions are common.)

While that neatly avoids much of the work in the RAID10-RAID5 conversion, aren't you screwed when you need to go the other way? Unless you save that for an out-of-line process as well, presumably driven by some accumulated usage statistics (stripe X gets heavy updates, migrate it to RAID10).

Stripe level HSM, IOW?
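The N-reads-plus-one-write accounting above can be sketched as a toy in-memory model. Everything here is illustrative, not any vendor's actual layout: the disks-as-dicts representation, the stripe map fields, and the function names are all invented for the example. The point is just that conversion fills the reserved parity slot without rewriting the N data chunks.

```python
from functools import reduce

def parity(chunks):
    """XOR equal-sized byte chunks into a RAID-5 parity chunk."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks))

def convert_stripe(disks, stripe):
    """Convert a RAID-10 stripe to RAID-5 in place: the N data chunks of
    one copy stay where they are, and the only new I/O is a single parity
    write into the slot left unused in the RAID-10 layout."""
    data = [disks[d][stripe["offset"]] for d in stripe["data_disks"]]  # N reads
    disks[stripe["parity_disk"]][stripe["offset"]] = parity(data)      # 1 write
    stripe["mode"] = "raid5"  # update the per-stripe map
```

Because parity is just the XOR of the data chunks, the same `parity` helper also reconstructs a lost chunk from the surviving chunks plus parity, which is why the superseded copy can be dropped once the parity write completes.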
#12
"Robert Wessel" wrote in message om...

While that neatly avoids much of the work in the RAID10-RAID5 conversion, aren't you screwed when you need to go the other way? Unless you save that for an out-of-line process as well, presumably driven by some accumulated usage statistics (stripe X gets heavy updates, migrate it to RAID10).

My (now somewhat vague) recollection is that they maintain usage stats on the stripes - that's how they determine what to move to RAID-5 when space starts to get tight. And there's at least somewhat less movement in the reverse direction: typical usage tends to be create, use heavily, use lightly, then delete (and for that matter update-in-place operations are themselves usually the exception rather than the rule, so perhaps the *most* typical usage - save for actively-updated databases - is create, then read one or more times, then delete).

Stripe level HSM, IOW?

Yup.

- bill
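The stats-driven demotion policy being recalled here might look something like the following rough sketch. All field names and the half-the-footprint accounting are assumptions for illustration: track a per-stripe write counter, and when space gets tight, demote the coldest RAID-10 stripes first.

```python
def pick_stripes_to_demote(stripes, chunks_needed):
    """Choose the coldest RAID-10 stripes to convert to RAID-5 when free
    space runs low. 'writes' is a per-stripe activity counter; demoting a
    stripe frees roughly half its RAID-10 footprint (the mirror copy)."""
    cold_first = sorted((s for s in stripes if s["mode"] == "raid10"),
                        key=lambda s: s["writes"])
    picked, freed = [], 0
    for s in cold_first:
        if freed >= chunks_needed:
            break
        picked.append(s["id"])
        freed += s["size"] // 2
    return picked
```

The same counters could drive the reverse direction: a RAID-5 stripe whose write count climbs past some threshold becomes a candidate for promotion back to RAID-10, which is the stripe-level HSM idea in a nutshell.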
#15
(Robert Wessel) wrote in message . com...
(perfnerd) wrote in message . com...

You don't migrate from RAID5 to RAID10. The read performance for RAID5 and RAID10 is similar since the data is automatically striped, so there is no need to migrate for read performance. Any time a block gets written, you just write it in RAID10, and let the migration algorithms handle the movement back to RAID5.

Bill and I were discussing a slightly different arrangement, where stripes are migrated between RAID5 and RAID10 configurations as appropriate. The earlier discussion involved a separate RAID10 area where updates were queued until they could be migrated to the main RAID5 area, presumably during a less busy time.

That was my point. When data is written, you write it in RAID10. As the data ages and there is pressure on disk usage, you start migrating from RAID10 to RAID5, during spare cycles. Once in RAID5 there is no need to migrate back from RAID5 to RAID10: read performance should be the same or better in RAID5, as the data is striped over more spindles.

If an existing block is re-written, you write it in RAID10 and mark the old block as out-of-date. The invalid block is still used for data-recovery purposes, just not for data access. Then you let a housekeeping process either convert the remaining blocks in the RAID5 checksum block back to RAID10, recalculate the checksum without the updated data, or reincorporate the new block into the RAID5 checksum block. Pick your rule, or come up with heuristics to choose one. Scheduling a conversion to RAID10 for the remaining blocks is probably simplest, as you can then let the 10-to-5 migration process handle the decision of when to migrate back to RAID5.
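The write-and-invalidate scheme described above can be sketched as a toy model; the class and field names are hypothetical, and the parity-group housekeeping is deliberately left out. The essential behavior is that fresh writes land in the mirrored region, while a superseded RAID-5 block is kept on disk for recovery purposes but marked stale so reads never return it.

```python
class HybridArray:
    """Toy model of the hybrid layout: fresh writes go to the mirrored
    (RAID-10) region; superseded RAID-5 blocks are kept for parity-based
    recovery but marked stale so reads never see them."""

    def __init__(self):
        self.raid10 = {}    # block id -> current data, mirrored region
        self.raid5 = {}     # block id -> data inside a parity group
        self.stale = set()  # RAID-5 blocks superseded by a RAID-10 copy

    def write(self, blk, data):
        self.raid10[blk] = data
        if blk in self.raid5:
            self.stale.add(blk)  # old version stays for recovery only

    def read(self, blk):
        if blk in self.raid10:   # newest copy always wins
            return self.raid10[blk]
        return self.raid5[blk]
```

A background housekeeping pass would then walk `stale`, apply whichever of the three rules was picked (convert the parity group's survivors back to RAID-10, recompute the checksum without the dead block, or fold the new block in), and clear the entries it reconciled.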