#6 - July 29th 08, 08:52 PM, posted to alt.comp.hardware.pc-homebuilt
VanguardLH (Posts: 1,453)
just curious... solid state hard drives

Matthew wrote:

Other than cost/GB, are there any other problems that I should be aware of
with these new drives? I see that there are a lot of smaller memory
companies that are now turning these out, in addition to the major
manufacturers, with slightly lower costs, like with flash memory...

Also... how do they stack up in performance compared to, say, a WD 10k Raptor
drive? (I'm about to google this, but thought I'd ask in here as well)


Oxide stress from repeated writes eventually overwhelms the
error-correcting algorithm and the reserved storage set aside for
remapping, which causes eventual catastrophic failure of the drive.
Also, as memory blocks become unusable, the remapping needed to use the
reserve space makes the device slower. Flash memory has three main
failure modes that impact reliability (these failure modes are not
independent; a toy simulation follows the list):

- Write Endurance: How many times a cell can be written/erased before it
becomes damaged (and has to be remapped which slows access).
- Write/Program Disturb: Writes to one page can alter bits in another
page that is not being written (aka "bit flip"). The other cell is not
damaged.
- Read Disturb: Reading one page can alter bits in another page not
being read (but does not damage the cells).
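
To make the disturb modes concrete, here is a toy simulation (my own
illustration; the endurance figure and flip probabilities are made-up
stand-ins, not vendor numbers). Heavy reads of one page slowly corrupt
its neighbors even though no cell is physically damaged:

import random

PAGES, BITS = 8, 16
ENDURANCE = 1000        # assumed write/erase cycles before a cell wears out
P_WRITE_DISTURB = 0.01  # assumed chance a write flips a bit in a neighbor page
P_READ_DISTURB = 0.001  # assumed chance a read flips a bit in a neighbor page

pages = [[0] * BITS for _ in range(PAGES)]  # stored bit values
wear = [0] * PAGES                          # write/erase count per page

def disturb(page, prob):
    # Disturb errors: bits flip in *adjacent* pages; the cells are
    # undamaged -- only the stored values change.
    for n in (page - 1, page + 1):
        if 0 <= n < PAGES:
            for i in range(BITS):
                if random.random() < prob:
                    pages[n][i] ^= 1

def write(page, data):
    wear[page] += 1
    if wear[page] > ENDURANCE:      # write endurance exceeded
        raise IOError("page %d worn out" % page)
    pages[page] = list(data)
    disturb(page, P_WRITE_DISTURB)  # write/program disturb hits neighbors

def read(page):
    disturb(page, P_READ_DISTURB)   # read disturb hits neighbors
    return list(pages[page])

write(3, [1] * BITS)
for _ in range(5000):
    read(3)
print("page 2 after 5000 reads of page 3:", pages[2])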

Due to oxide stress and the eventual failure of a cell (which takes out
a page), some SSDs use a wear-leveling algorithm: writes are distributed
across blocks within the Flash chips, and ECC is used so that failed
cells can be corrected when read. If ECC fails, the block is marked as
unusable and gets remapped (slower performance due to the lookup), but
obviously there is only a fixed amount of reserved blocks for this
remapping; a minimal sketch of the scheme follows.
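
Here is a minimal wear-leveling sketch (my own illustration of the
idea, not any manufacturer's actual algorithm): each write goes to the
least-worn available physical block, and a block that can no longer be
corrected is retired and remapped out of a fixed reserve pool.

class FlashDevice:
    def __init__(self, blocks=100, reserve=10, endurance=1000):
        total = blocks + reserve
        self.wear = [0] * total       # write/erase count per physical block
        self.endurance = endurance
        self.map = {}                 # logical -> physical (the remap lookup)
        self.free = set(range(total))
        self.bad = set()              # retired blocks, never reused

    def write(self, logical, data):
        # Wear leveling: always program the least-worn free block.
        phys = min(self.free, key=lambda b: self.wear[b])
        self.free.discard(phys)
        self.wear[phys] += 1
        if self.wear[phys] > self.endurance:  # stand-in for ECC giving up
            self.bad.add(phys)
            if not self.free:
                raise IOError("reserve exhausted: drive fails outright")
            return self.write(logical, data)  # remap onto another block
        old = self.map.get(logical)
        if old is not None:
            self.free.add(old)        # the stale copy's block is reusable
        self.map[logical] = phys      # data store itself omitted

# Hammer it until the reserve runs out (small numbers so it runs fast):
dev = FlashDevice(blocks=20, reserve=2, endurance=500)
n = 0
try:
    while True:
        dev.write(n % 20, "x")
        n += 1
except IOError as err:
    print("failed after %d writes: %s" % (n, err))

Note the failure pattern: the device absorbs abuse silently for a long
time, then dies all at once when the last reserve block is retired.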
Many Flash chip manufacturers claim that write cycles per cell exceed 1
million before a non-recoverable error, but some tests have shown
failure after only 200,000 write/erase cycles. Reads do not cause oxide
stress, so SSDs are best for data storage that is relatively static.
Once the remapping demand exceeds the drive's reserve capacity, it fails
catastrophically and instantly, hence it should be used in a recoverable
RAID setup, like RAID-5.
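
To put those cycle figures in perspective, here is the back-of-envelope
math (the capacity and workload numbers are assumptions for
illustration, and real drives fall well short of the ideal because of
write amplification and uneven write patterns). Under perfect wear
leveling, lifetime scales with capacity times cycles divided by the
write volume:

capacity_gb = 64        # assumed drive size
daily_writes_gb = 20    # assumed write workload
for cycles in (1000000, 200000):  # vendor claim vs. pessimistic test figure
    days = capacity_gb * cycles / float(daily_writes_gb)
    print("%8d cycles/cell -> about %d years" % (cycles, days / 365))

Even the pessimistic figure stretches a long way under ideal leveling;
the catch, as noted above, is that when failure does come it is sudden.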
There are tools to monitor the gradual degradation of traditional hard
drives. I'm not sure if there are tools to monitor the level of
non-recoverable ECC errors, how many remaps there are, and how fast the
remaps are accruing, to indicate imminent catastrophic failure.
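
If the drive exposes its remap activity through SMART, something like
this would poll it (smartmontools' smartctl; the attribute name is the
conventional one for reallocations, but whether a given SSD reports
anything meaningful there is vendor-dependent, so treat this as a
sketch):

import re, subprocess

def reallocated_count(device="/dev/sda"):
    # 'smartctl -A' dumps the SMART attribute table; the raw value is
    # the last number on the attribute's line.
    out = subprocess.check_output(["smartctl", "-A", device]).decode()
    m = re.search(r"Reallocated_Sector_Ct.*?(\d+)\s*$", out, re.MULTILINE)
    return int(m.group(1)) if m else None

print("remapped sectors so far:", reallocated_count())

Sampling that count periodically and watching how fast it grows would
give at least a crude early warning of the kind I described.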

You haven't stated in what computing environment you intend to use
SSDs. Most likely you are considering them for personal use. Well, if
you have loads of cash burning a hole in your pocket that you must get
rid of, then set up a test host and see for yourself. $15/GB for *good*
SSDs (versus $0.20/GB for 7200.11 HDDs) means you'll spend a lot more,
get smaller capacity per drive, and see a speed improvement only in
benchmarks or in a limited number of special or contrived scenarios.