
Node Count Maintenance

A level for a new node is usually chosen as follows: level 0 with probability 2^-1, level 1 with probability 2^-2, level 2 with probability 2^-3, and so on. The maximum possible level for a node can be either a predefined constant (for example, 32 or 64) or based on the current number of nodes N in the skip list (for example, log2(N)). The former strategy produces worse results (an unreasonably high level can be chosen accidentally), but does not require maintenance of a node count. The latter strategy produces more consistent results, but does require maintenance of a node count (which is problematic in a concurrent environment). I've chosen the latter strategy.
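The coin-flip selection above can be sketched as follows. This is a hypothetical helper (`random_level` is an illustrative name, not part of the implementation); it returns level k with probability 2^-(k+1) and caps the result at the current maximum level:

```cpp
#include <cassert>
#include <random>

// Pick a level with P(level == k) = 2^-(k+1), capped at max_level - 1.
// Each coin flip succeeds with probability 1/2.
unsigned random_level(std::mt19937& rng, unsigned max_level)
{
    unsigned level = 0;
    while (level + 1 < max_level && (rng() & 1))
        level += 1;
    return level;
}
```

With the node-count-based strategy, `max_level` would be derived from log2 of the current node count.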

The straightforward implementation of node count maintenance is an atomic fetch-and-add on a centralized counter variable. Every insert then contends on a single shared cache line, which destroys scalability, so it is not an option.
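For contrast, the centralized variant is trivial to write (a sketch with illustrative names `g_node_count` and `count_insert`); the single shared counter is exactly the contention point:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// One counter shared by all threads: correct, but every insert
// performs an atomic RMW on the same cache line.
std::atomic<std::size_t> g_node_count{0};

std::size_t count_insert()
{
    return g_node_count.fetch_add(1, std::memory_order_relaxed) + 1;
}
```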

I use the following decentralized algorithm, trading precision for decentralization and thus scalability. Each thread has a private counter, which is incremented after each insert. The counters are periodically aggregated (summed up) to produce the total node count. The aggregation period is based on an estimate of when the total node count will cross the next 2^N mark. Here is the algorithm:

// get the private descriptor for the current thread
thread_t* thr = get_thread_desc();
// increment the thread-local node counter
size_t const size = thr->size.load(memory_order_relaxed);
thr->size.store(size + 1, memory_order_relaxed);
// check whether we need to aggregate the counters now
if (size >= thr->next_check)
{
    // iterate over all thread descriptors
    // and calculate the total node count
    size_t total_size = 0;
    size_t thread_count = 0;
    thread_t* desc = thread_list_.load(memory_order_consume);
    while (desc)
    {
        thread_count += 1;
        total_size += desc->size.load(memory_order_relaxed);
        desc = desc->next;
    }

    // check whether we crossed the next 2^N mark
    size_t new_level = level_.load(memory_order_acquire);
    if (total_size > (1ull << new_level) * density_factor)
    {
        // if so, update the maximum possible node level
        unsigned long idx;
        _BitScanReverse64(&idx, total_size);
        new_level = idx + 1;
        if (new_level >= max_level_count)
            new_level = max_level_count - 1;
        // raise level_ with a CAS loop; never lower it, since
        // another thread may have published a higher level already
        size_t cmp = level_.load(memory_order_relaxed);
        while (cmp < new_level
            && false == level_.compare_exchange_weak(cmp, new_level, memory_order_relaxed))
            ;
        if (cmp > new_level)
            new_level = cmp;
    }
    // estimate when the total node count will cross the next 2^N mark,
    // and schedule the next aggregation for roughly that point
    size_t const remain = (1ull << new_level) * density_factor - total_size;
    size_t const cur_size = thr->size.load(memory_order_relaxed);
    thr->next_check = cur_size + remain / thread_count / 2;
}
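The `_BitScanReverse64` intrinsic used above is MSVC-specific. A portable sketch of the same level computation (with hypothetical helper names `floor_log2` and `level_for_size`, not part of the original code) might look like:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Index of the highest set bit of x, i.e. the same idx that
// _BitScanReverse64 produces for x > 0.
std::size_t floor_log2(std::uint64_t x)
{
    std::size_t idx = 0;
    while (x >>= 1)
        idx += 1;
    return idx;
}

// new_level = idx + 1, clamped to max_level_count - 1, as in the listing.
std::size_t level_for_size(std::uint64_t total_size, std::size_t max_level_count)
{
    std::size_t new_level = floor_log2(total_size) + 1;
    if (new_level >= max_level_count)
        new_level = max_level_count - 1;
    return new_level;
}
```

For example, with 1023 nodes the highest set bit is 9, so the maximum level becomes 10; at 1024 nodes it becomes 11.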
Move on to Memory Allocation