
Lockfree Reader Pattern

A significant fraction of the operations are read-only (attempts to insert an already present game state), so we must ensure that they are physically read-only in our implementation. A common way to achieve this is the lockfree reader pattern.

The idea of the pattern is to keep the data structure consistent and available for reading at all times, regardless of how mutating operations are implemented. That is, mutating operations may be synchronized with each other by means of mutexes; however, each mutating operation must have a linearization point with regard to readers, after which the operation takes full effect and becomes completely visible to all readers. This means that mutating operations, even when executed under a mutex, must still be implemented with concurrency in mind.

Here is the essence of the pattern, illustrated with a concurrent singly-linked list (C++11 concurrency primitives are used):

#include <atomic>
#include <mutex>

struct slist_t
{
    struct node_t
    {
        std::atomic<node_t*>    next;
        void*                   data;
    };

    std::mutex                  mtx;
    std::atomic<node_t*>        head;

    void insert(node_t* node)
    {
        // no concurrent mutators for simplicity
        mtx.lock();
        // find previous and next nodes
        // (find_prev_node is an ordinary traversal helper, not shown)
        node_t* prev = find_prev_node(node);
        node_t* next = prev->next.load(std::memory_order_relaxed);
        node->next.store(next, std::memory_order_relaxed);
        // serialization point wrt readers
        // during this operation the list atomically proceeds
        // from one state to another
        // (to the one with the node inserted)
        prev->next.store(node, std::memory_order_release);
        mtx.unlock();
    }

    void foreach(void (*func)(node_t*))
    {
        // lockfree reading
        // contains no heavy synchronization
        // because writers take care of consistency
        node_t* node = head.load(std::memory_order_consume);
        while (node)
        {
            func(node);
            node = node->next.load(std::memory_order_consume);
        }
    }
};

The fact that there are no concurrent remove operations greatly simplifies the implementation. If there are concurrent remove operations, then some form of PDR (Partial-copy-on-write Deferred Reclamation) must be employed in order to ensure that readers can read/traverse the data structure even while nodes are being removed concurrently.

Move on to Lockfree Insert Operation
