Now the only remaining problem is that nasty acquire fence on the fast path of both algorithms. Acquire memory fences do not hinder scalability, but they still have some associated cost (on the order of a few tens of cycles). Here I need to make a proviso - on some hardware platforms (most notably x86 and SPARC TSO) acquire fences are implicit in each load, that is, they are costless no-ops. So if you are targeting only such platforms, you are OK with the version that issues an acquire fence on the fast path. However, do not fall into the fallacy of thinking that you may remove them completely - you still need to ensure proper code generation by the compiler.

There are two ways to eliminate the fence: weaken it to a consume fence, or eliminate it completely via the thread-local cache trick.

If there is a data dependency between the synchronization load (the load of a pointer or a flag) and the associated data, then we can weaken the acquire fence to a consume fence. A consume fence is a costless no-op on most modern architectures (to the best of my knowledge, the only architecture that requires a real fence for consume is the now almost dead DEC Alpha). Here is an example with a data dependency (it's a fusion of an initialization function and the surrounding code, as if the compiler had inlined the initialization function):
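(A sketch in C++11 atomics; Singleton, g_instance and the create_instance_slow() slow path are illustrative names.)

    #include <atomic>

    struct Singleton { void work(); };  // illustrative type
    Singleton* create_instance_slow();  // hypothetical slow path (mutex + double-check)

    std::atomic<Singleton*> g_instance;

    void foo()
    {
        Singleton* inst = g_instance.load(std::memory_order_consume);
        if (inst == nullptr)
            inst = create_instance_slow();
        // this access is data-dependent on the loaded pointer value,
        // so consume ordering is enough to see the object's contents
        inst->work();
    }

Here is an example without a data dependency: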
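(Again a sketch; g_initialized, g_data, use() and initialize() are illustrative names.)

    #include <atomic>

    void use(int);      // hypothetical consumer of the data
    void initialize();  // hypothetical slow path that sets g_data and then the flag

    std::atomic<bool> g_initialized;
    int g_data;

    void foo()
    {
        if (g_initialized.load(std::memory_order_consume))
            use(g_data);  // the load of g_data does NOT depend on the flag's value
        else
            initialize();
    }

And in this case we need a real acquire fence, because otherwise the compiler or the hardware can reorder it as: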
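    void foo()
    {
        int tmp = g_data;  // hoisted above the flag check:
                           // may read a half-initialized value
        if (g_initialized.load(std::memory_order_consume))
            use(tmp);
        else
            initialize();
    }

So, here is a blocking initialization that uses a purposely introduced data dependency to weaken the acquire fence to a consume fence. It can be considered virtually zero-overhead on the fast path on most modern hardware: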
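(A sketch of double-checked locking with consume on the fast path; names are illustrative.)

    #include <atomic>
    #include <mutex>

    struct Singleton { void work(); };  // illustrative type

    std::atomic<Singleton*> g_instance;
    std::mutex g_mutex;

    Singleton* get_instance()
    {
        // fast path: all accesses to the object go through the loaded pointer,
        // i.e. are data-dependent on it, so consume ordering suffices
        Singleton* inst = g_instance.load(std::memory_order_consume);
        if (inst == nullptr)
        {
            std::lock_guard<std::mutex> guard(g_mutex);
            inst = g_instance.load(std::memory_order_relaxed);  // re-check under the lock
            if (inst == nullptr)
            {
                inst = new Singleton;
                g_instance.store(inst, std::memory_order_release);  // publish
            }
        }
        return inst;
    }

But don't hurry, the problem is that sometimes you don't actually have a data dependency where you may think you have one (do not fall into this fallacy like some people do). Consider the following code: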
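(A sketch reconstructed around the g_log and s_var variables discussed below; Logger and the initialization values are illustrative.)

    struct Logger { void write(const char* msg); };  // illustrative type
    void use(int);

    Logger* g_log;     // global, set up during Singleton's construction
    static int s_var;  // static, also set up during Singleton's construction

    struct Singleton
    {
        Singleton() { g_log = new Logger; s_var = 42; }
        void work() { g_log->write("..."); use(s_var); }
    };

    void foo()
    {
        Singleton* inst = get_instance();  // consume load on the fast path
        inst->work();                      // looks data-dependent... or is it?
    }

Is there a data dependency or not? Consider what we have here in essence: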
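    void foo()
    {
        Singleton* inst = g_instance.load(std::memory_order_consume);
        if (inst == nullptr)
            inst = create_instance_slow();
        // Singleton::work() inlined - the loads below do not depend
        // on the value of 'inst' in any way:
        Logger* log = g_log;  // independent load of a global
        int var = s_var;      // independent load of a static
        log->write("...");
        use(var);
    }

See? There are no data dependencies between the load of g_instance and the loads of g_log and s_var. They are completely independent loads of global variables. So be careful with memory_order_consume, and if you are creating a general-purpose library, you need to either reject memory_order_consume (luckily, there is another way to eliminate the acquire fence) or distinctly warn your users about the restrictions.

There is an interesting trick based on thread-local storage that allows you to completely eliminate the acquire fence. Basically, each thread just caches a pointer to the object in thread-local storage: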
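(A sketch using C++11 thread_local; if a thread finds its cached pointer set, it has already synchronized with the initialization - it either created the object itself or read the pointer under the mutex - so the fast path needs no fences at all.)

    #include <atomic>
    #include <mutex>

    struct Singleton { void work(); };  // illustrative type

    std::atomic<Singleton*> g_instance;
    std::mutex g_mutex;
    thread_local Singleton* t_instance;  // per-thread cached pointer

    Singleton* get_instance()
    {
        // fast path: a plain thread-local load, no fences
        Singleton* inst = t_instance;
        if (inst == nullptr)
        {
            // slow path: the mutex provides all necessary synchronization,
            // so relaxed accesses to g_instance suffice here
            std::lock_guard<std::mutex> guard(g_mutex);
            inst = g_instance.load(std::memory_order_relaxed);
            if (inst == nullptr)
            {
                inst = new Singleton;
                g_instance.store(inst, std::memory_order_relaxed);
            }
            t_instance = inst;  // cache for subsequent calls by this thread
        }
        return inst;
    }

Access to thread-local storage costs on the order of 2 indirect loads, that is, it's very fast. However, on x86/SPARC TSO the version with the acquire fence is a bit faster. This version requires a thread-local storage slot and a mutex per lazily initialized object, which may be expensive in some contexts. There is a way to eliminate, or more precisely to amortize, this cost, so that there is a single thread-local slot and a single mutex system-wide (for all lazily initialized objects):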
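(A sketch of one possible scheme, not necessarily the only one: a global generation counter counts initializations, and each thread remembers, in a single TLS slot, the last generation it synchronized with by locking the global mutex. All names are illustrative.)

    #include <atomic>
    #include <mutex>

    std::mutex g_mutex;                  // single mutex for ALL lazy objects
    unsigned g_generation;               // # of initializations so far (under g_mutex)
    thread_local unsigned t_generation;  // last generation this thread synchronized with

    struct lazy_object
    {
        std::atomic<void*>    ptr;  // the lazily created object
        std::atomic<unsigned> gen;  // generation at which ptr was published
    };

    void* get(lazy_object* obj, void* (*create)())
    {
        void* p = obj->ptr.load(std::memory_order_relaxed);
        unsigned g = obj->gen.load(std::memory_order_relaxed);
        // fast path: if this thread has already synchronized (via g_mutex)
        // with the generation that published the object, the object's memory
        // is guaranteed to be visible and no fence is needed
        if (p != nullptr && g != 0 && g <= t_generation)
            return p;
        // slow path: locking the mutex synchronizes us with all
        // initializations performed so far
        std::lock_guard<std::mutex> guard(g_mutex);
        p = obj->ptr.load(std::memory_order_relaxed);
        if (p == nullptr)
        {
            p = create();
            obj->gen.store(++g_generation, std::memory_order_relaxed);
            obj->ptr.store(p, std::memory_order_relaxed);
        }
        t_generation = g_generation;  // remember what we have synchronized with
        return p;
    }

A single mutex can hinder scalability if there are a lot of lazily initialized objects, so this solution can be further improved by using a hash table of mutexes (and associated counters).

The last thing regarding lazy initialization is that you need to consider the necessity of static initialization of the primitive. If you are creating a reusable solution, then it's possible that it will be used "before main()", that is, threads are started from constructors of global objects and from initialization routines of dynamic libraries.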
Then you need to provide either an initializing macro or a C++0x constexpr constructor. For example, pthread_once() provides it in the form of the PTHREAD_ONCE_INIT macro:

    pthread_once_t once_control = PTHREAD_ONCE_INIT;

A potential problem is that all the primitives that you use must support static initialization as well. That is, you can't use, for example, CRITICAL_SECTION or std::mutex, because they do not support static initialization. However, you can use pthread_mutex_t, because it provides the static initializer PTHREAD_MUTEX_INITIALIZER.
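(A sketch of a reusable primitive that admits static initialization via an initializing macro; lazy_t and LAZY_INITIALIZER are illustrative names.)

    #include <pthread.h>

    typedef struct lazy_t
    {
        pthread_mutex_t mtx;  // pthread_mutex_t supports static initialization
        void*           ptr;  // the lazily created object
    } lazy_t;

    #define LAZY_INITIALIZER { PTHREAD_MUTEX_INITIALIZER, 0 }

    // usable "before main()" - no runtime initialization required
    lazy_t g_lazy = LAZY_INITIALIZER;

Well, that's mainly all I can tell about lazy initialization.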