
Parity Caching

To avoid the additional page transfers induced by the basic parity method, we have developed a parity caching scheme that computes the parity on the client side instead of sending pages to a parity server. Our policy assumes that a small number (e.g. 8) of memory frames on the client side act as a software cache for parity pages. Parity is updated in two stages (sketched in code below):

  1. When a page is swapped in, its parity page is fetched (if not already in the client's cache), and the XOR of the page and the parity is computed and stored into the local parity frame. This operation ``removes'' the newly swapped-in page from the contents of the parity block.
  2. When a page is swapped out, its parity page is fetched (if not already in the client's cache), and the XOR of the page and the parity is computed and stored into the local parity frame. This operation ``adds'' the swapped-out page to the parity block.
When a server crashes, each of its pages that does not reside in the client's memory can be restored by XORing the surviving pages of its parity group (those that do not reside in the client's memory) with the parity page.
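
The following C sketch illustrates the two update stages and the recovery step under stated assumptions: the page size, the cache size of eight frames, the eviction policy, and the transport routines fetch_parity_page() and write_back_parity() are hypothetical placeholders, not part of the system described here.

  /* Minimal sketch of client-side parity caching (assumptions noted above). */
  #include <stddef.h>
  #include <stdint.h>

  #define PAGE_SIZE      4096
  #define PARITY_FRAMES  8            /* small client-side software cache */

  typedef struct {
      long    group;                  /* parity group held here, -1 if free */
      uint8_t data[PAGE_SIZE];
  } parity_frame_t;

  static parity_frame_t cache[PARITY_FRAMES];

  /* Hypothetical transport routines for moving parity pages between the
   * client cache and the remote parity server. */
  extern void fetch_parity_page(long group, uint8_t *buf);
  extern void write_back_parity(long group, const uint8_t *buf);

  void init_parity_cache(void)
  {
      for (int i = 0; i < PARITY_FRAMES; i++)
          cache[i].group = -1;        /* mark every frame as free */
  }

  /* Return the cached frame for `group`, fetching (and evicting) if needed. */
  static parity_frame_t *lookup_parity(long group)
  {
      for (int i = 0; i < PARITY_FRAMES; i++)
          if (cache[i].group == group)
              return &cache[i];

      parity_frame_t *victim = &cache[group % PARITY_FRAMES]; /* simple eviction */
      if (victim->group >= 0)
          write_back_parity(victim->group, victim->data);
      victim->group = group;
      fetch_parity_page(group, victim->data);
      return victim;
  }

  /* XOR a page into the cached parity frame of its group.  The same XOR
   * "removes" a page on swap-in and "adds" it on swap-out. */
  static void xor_into_parity(long group, const uint8_t *page)
  {
      parity_frame_t *f = lookup_parity(group);
      for (size_t i = 0; i < PAGE_SIZE; i++)
          f->data[i] ^= page[i];
  }

  void on_swap_in(long group, const uint8_t *page)  { xor_into_parity(group, page); }
  void on_swap_out(long group, const uint8_t *page) { xor_into_parity(group, page); }

  /* Recover a lost page of `group` by XORing the parity page with the n
   * surviving pages of the group that do not reside in the client's memory. */
  void recover_page(long group, uint8_t *const *surviving, int n, uint8_t *out)
  {
      fetch_parity_page(group, out);          /* start from the parity page */
      for (int i = 0; i < n; i++)
          for (size_t j = 0; j < PAGE_SIZE; j++)
              out[j] ^= surviving[i][j];
  }

Because swap-in XORs a page out of its parity block, pages resident in the client's memory are deliberately excluded from both the parity contents and the recovery computation.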

Compared to the basic parity method, parity caching results in significantly fewer page transfers and does not need to keep swapped-out pages around while a parity server finishes computing the new parity. Our performance measurements, reported in section 5.6, show that even when only a small number of client memory frames (8) is used as a cache for parity pages, parity caching results in at most 5% more page transfers than when no reliability policy is used.


