Systems Seminar - CSE

Techniques for Implementing a Persistent, Deduplicating Cache in Flash Memory

Michael Condict, Advanced Technologies Group, NetApp, Inc.

Flash memory is becoming ubiquitous in the data center. Given its
performance characteristics, one way to take advantage of it is to
use it as a second-level cache for disk I/O, between the RAM-based
cache of the storage client and the primary storage. This reduces
the bandwidth requirements on the link between the client and networked
storage, and reduces I/O latency for applications using the storage.
The cache should probably be persistent, since its large size
implies that it will take a long time to rewarm after a reboot.
When many highly similar storage clients share the same flash cache,
deduplication of cached data greatly increases the effective size
and hit rate of the cache.
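
As a rough illustration (a hypothetical sketch, not code from the talk), a
deduplicated cache can keep two mappings: one from each client's logical
block address to a content fingerprint, and one from the fingerprint to a
single cached copy on flash, so that identical blocks from many clients
occupy flash only once. The fingerprint function, table sizes, and names
below are assumptions made for the example.

```c
/* Hypothetical sketch of a deduplicating second-level cache: duplicate
 * content is detected by fingerprint, so it is written to flash only once. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLOCK_SIZE   4096
#define MAX_ENTRIES  1024          /* bounds checks omitted for brevity */

typedef struct { uint32_t client; uint64_t lba; uint64_t fp; } addr_entry;
typedef struct { uint64_t fp; uint64_t flash_off; uint32_t refs; } data_entry;

static addr_entry addr_map[MAX_ENTRIES]; static int n_addr;
static data_entry data_map[MAX_ENTRIES]; static int n_data;
static uint64_t next_flash_off;    /* sequential write pointer into flash */

/* Toy FNV-1a fingerprint; a real cache would use a strong,
 * collision-resistant hash. */
static uint64_t fingerprint(const uint8_t *blk) {
    uint64_t h = 1469598103934665603ULL;
    for (int i = 0; i < BLOCK_SIZE; i++) { h ^= blk[i]; h *= 1099511628211ULL; }
    return h;
}

/* Cache a block: the data is written to flash only if its content is new. */
void cache_insert(uint32_t client, uint64_t lba, const uint8_t *blk) {
    uint64_t fp = fingerprint(blk);
    int i;
    for (i = 0; i < n_data && data_map[i].fp != fp; i++)
        ;
    if (i == n_data) {                       /* new content: one flash write */
        data_map[n_data++] = (data_entry){ fp, next_flash_off, 0 };
        next_flash_off += BLOCK_SIZE;        /* flash_write(...) would go here */
    }
    data_map[i].refs++;
    addr_map[n_addr++] = (addr_entry){ client, lba, fp };
}

int main(void) {
    uint8_t blk[BLOCK_SIZE];
    memset(blk, 0xAB, sizeof blk);
    cache_insert(1, 100, blk);               /* first copy: written to flash */
    cache_insert(2, 7, blk);                 /* duplicate: metadata only     */
    printf("cached copies: %d, logical references: %d\n", n_data, n_addr);
    return 0;
}
```

In this toy run, two logical blocks with identical contents cost one flash
write and one cached copy, which is why a shared, deduplicated cache can
behave as if it were larger than its physical capacity.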

One challenge in implementing a flash cache is the poor performance
of flash in random write operations. A persistent cache needs to
store its meta-data persistently and keep it consistent with the
cached data at all times, in case of a crash or power failure.
Each meta-data entry is small compared to a disk block, so it is
not practical to update a single randomly located meta-data entry
on the flash memory each time the state of the cache changes.
Furthermore, in a deduplicating cache, it is not possible to store
each meta-data entry contiguously with the associated data block
and write both atomically, because there is a variable amount of
meta-data associated with each block, and its size can change over
time as more duplication is discovered.
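
To make the variable-size point concrete, here is one hypothetical metadata
layout (the names and fields are illustrative, not necessarily those used in
practice): each cached block's entry carries a growing list of the logical
addresses that map to it, so the entry cannot be frozen into a fixed-size
slot next to the data block when the block is first written.

```c
/* Illustrative only: per-block metadata in a deduplicating cache grows as
 * more duplicates of the block are discovered, so it cannot be stored
 * contiguously with the data block and written atomically with it. */
#include <stdint.h>
#include <stdlib.h>

typedef struct { uint32_t client; uint64_t lba; } logical_ref;

typedef struct {
    uint64_t     fp;          /* content fingerprint of the cached block    */
    uint64_t     flash_off;   /* where the single data copy lives on flash  */
    uint32_t     nrefs;       /* number of logical addresses sharing it     */
    logical_ref *refs;        /* variable-length; grows over time           */
} block_meta;

/* Record that another logical address maps to an already-cached block. */
int meta_add_ref(block_meta *m, uint32_t client, uint64_t lba) {
    logical_ref *r = realloc(m->refs, (m->nrefs + 1) * sizeof *r);
    if (r == NULL)
        return -1;
    r[m->nrefs] = (logical_ref){ client, lba };
    m->refs = r;
    m->nrefs++;
    return 0;
}

int main(void) {
    block_meta m = { .fp = 0xfeed, .flash_off = 0, .nrefs = 0, .refs = NULL };
    meta_add_ref(&m, 1, 100);   /* first reference                        */
    meta_add_ref(&m, 2, 7);     /* duplicate found later: the entry grows */
    free(m.refs);
    return 0;
}
```

Because the entry's final size is not known when the data block is written,
the metadata must be maintained separately from the data, which is what
creates the consistency problem described above.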

Finally, conventional page-replacement policies such as LRU or
LFU would result in random writes to the cached data, as blocks are
evicted and replaced with newly cached blocks. A FIFO policy would
avoid this, but has a poor hit rate compared to the others. What is
needed is a mostly sequential page-replacement policy that still
achieves a good hit rate.
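
One family of policies with the desired shape is a segmented, second-chance
FIFO, sketched below under illustrative assumptions (the talk may well
present a different policy): blocks are always appended at a sequential
head, whole segments are reclaimed in arrival order, and blocks referenced
since insertion are copied forward instead of dropped, improving the hit
rate over pure FIFO while keeping writes sequential.

```c
/* Hypothetical mostly-sequential replacement policy: append-only writes
 * into large segments, whole-segment reclamation, second chance for
 * blocks that were referenced after being cached. */
#include <stdint.h>

#define BLOCKS_PER_SEG 64
#define NUM_SEGS       8

typedef struct { uint64_t lba; uint8_t valid; uint8_t referenced; } slot;
typedef struct { slot slots[BLOCKS_PER_SEG]; } segment;

static segment segs[NUM_SEGS];
static int head_seg, head_slot;     /* sequential append position       */
static int tail_seg;                /* oldest segment, reclaimed first  */

static void append_block(uint64_t lba);

/* Reclaim the oldest segment; blocks referenced since insertion get a
 * second chance and are copied forward to the sequential head. */
static void evict_tail_segment(void) {
    segment *s = &segs[tail_seg];
    slot survivors[BLOCKS_PER_SEG];
    int n = 0;
    for (int i = 0; i < BLOCKS_PER_SEG; i++) {
        if (s->slots[i].valid && s->slots[i].referenced)
            survivors[n++] = s->slots[i];
        s->slots[i].valid = 0;                 /* the rest are evicted */
    }
    tail_seg = (tail_seg + 1) % NUM_SEGS;
    for (int i = 0; i < n; i++)
        append_block(survivors[i].lba);        /* still a sequential write */
}

/* Newly cached and surviving blocks are always written at the head, so the
 * flash sees mostly sequential writes rather than in-place replacement. */
static void append_block(uint64_t lba) {
    while (head_slot == BLOCKS_PER_SEG) {      /* current segment is full  */
        head_seg = (head_seg + 1) % NUM_SEGS;
        head_slot = 0;
        if (head_seg == tail_seg)
            evict_tail_segment();              /* make room, oldest first  */
    }
    segs[head_seg].slots[head_slot++] =
        (slot){ .lba = lba, .valid = 1, .referenced = 0 };
}

/* A cache hit only sets a bit in RAM-resident state; it costs no flash write. */
static void mark_referenced(int seg, int idx) {
    segs[seg].slots[idx].referenced = 1;
}

int main(void) {
    for (uint64_t lba = 0; lba < 1000; lba++)  /* fill the cache past capacity */
        append_block(lba);
    mark_referenced(tail_seg, 0);              /* this block survives eviction */
    append_block(1000);
    return 0;
}
```

Reclaiming whole segments in order also maps naturally onto the large erase
units of flash memory.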

I'll discuss several techniques we've devised to work around these
problems, including new page-replacement policies, journalling of
meta-data updates, batching of updates, and the use of deduplication
fingerprints to relax consistency constraints between the data and
the meta-data, reducing the number of writes to the flash.
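
The following is a minimal sketch of how such journaling and batching might
look; the structure names, batch size, and recovery check are illustrative
assumptions rather than the actual design. Many small metadata updates
accumulate in RAM and are flushed as one sequential append, and because each
entry records the block's fingerprint, recovery can discard any entry whose
cached data no longer matches it, treating that as a miss rather than
requiring data and metadata to be written in lockstep.

```c
/* Hypothetical sketch: metadata updates are batched in RAM and appended to a
 * sequential journal on flash; fingerprints let recovery tolerate metadata
 * that got ahead of (or behind) the data it describes. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define BATCH_SIZE 128

typedef struct {
    uint64_t fp;          /* content fingerprint of the cached block     */
    uint64_t flash_off;   /* location of the data copy on flash          */
    uint32_t client;      /* logical address that now maps to this block */
    uint64_t lba;
} journal_entry;

static journal_entry batch[BATCH_SIZE];
static int           batch_len;
static uint64_t      journal_off;        /* sequential append point */

/* Stand-in for one large sequential write of a journal batch to flash. */
static void flash_append(uint64_t off, const void *buf, size_t len) {
    (void)off; (void)buf; (void)len;     /* real code would issue the I/O */
}

/* Queue one metadata update; many updates amortize to one flash write. */
void journal_update(uint64_t fp, uint64_t flash_off,
                    uint32_t client, uint64_t lba) {
    batch[batch_len++] = (journal_entry){ fp, flash_off, client, lba };
    if (batch_len == BATCH_SIZE) {
        flash_append(journal_off, batch, sizeof batch);
        journal_off += sizeof batch;
        batch_len = 0;
    }
}

/* At recovery, an entry whose data no longer matches its fingerprint is
 * treated as a cache miss rather than an error, relaxing the ordering
 * constraints between data writes and metadata writes. */
int recover_entry(const journal_entry *e, const uint8_t *block_on_flash,
                  uint64_t (*fingerprint)(const uint8_t *)) {
    return fingerprint(block_on_flash) == e->fp;
}

int main(void) {
    journal_update(0xfeed, 0, 1, 100);
    journal_update(0xbeef, 4096, 2, 7);
    printf("metadata updates waiting in the RAM batch: %d\n", batch_len);
    return 0;
}
```

Under a scheme like this, a crash can at worst lose recently cached blocks;
it never returns wrong data, which is an acceptable trade-off for a cache.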

Michael Condict received his B.S. in Mathematics from Lehigh University
in 1976 and an M.S. in Computer Science from Cornell University
in 1982.

He worked on the first Ada compiler while a Research Scientist at New
York University, worked on circuit-design languages at Bell Labs,
Murray Hill, and spent a year on the Amoeba OS project with Andrew
Tanenbaum at the Free University in Amsterdam.

Returning to industry, he spent seven years in the Open Software
Foundation Research Institute, helping to design and build the Mach
micro-kernel-based version of OSF/1 and also OSF/AD, the version
that ran on several commercial massively parallel computing systems.

Following this, he joined several early-stage startups, including
InfoLibria (web caching), Oryxa (component-based storage programming),
and BladeLogic (data-center automation), the last of which went
public in the summer of 2007.

He is currently a member of the Advanced Technologies Group at NetApp,
where his research interests include deduplication and the
innovative use of flash technology.

Sponsored by SSL