Memory

Detailed Description

Classes

class   Atomic32< T >
class   Atomic64< T >
class   Atomic16< T >
class   Atomic8< T >
class   AtomicBool
class   AtomicFloatType< T >
class   AtomicPtr< T >
class   StrongReferenceCounter

Macros

#define  MemoryFenceAcquire ()
#define  MemoryFenceRelease ()
#define  MemoryFenceSequential ()

Typedefs

using  AtomicInt32 = Atomic32 < Int32 >
using  AtomicUInt32 = Atomic32 < UInt32 >
using  AtomicInt64 = Atomic64 < Int64 >
using  AtomicUInt64 = Atomic64 < UInt64 >
using  AtomicInt = Atomic64 < Int >
using  AtomicUInt = Atomic64 < UInt >
using  AtomicInt16 = Atomic16 < Int16 >
using  AtomicUInt16 = Atomic16 < UInt16 >
using  AtomicInt8 = Atomic8 < Char >
using  AtomicUInt8 = Atomic8 < UChar >
using  AtomicFloat32 = AtomicFloatType < Float32 >
using  AtomicFloat64 = AtomicFloatType < Float64 >
using  AtomicVoidPtr = AtomicPtr < void >

Macro Definition Documentation

◆  MemoryFenceAcquire

#define MemoryFenceAcquire()

The term load means "read access to a memory location" and a store is a "write access to a memory location". The terms acquire and release, which are used to describe a type of barrier, are derived from the way a mutex works: when it is acquired (locked), it ensures that the current thread will see the stores of other threads (which have released the mutex). And when a mutex is released (unlocked), it ensures that the stores of the current thread will be visible to other threads (when they acquire the mutex).

MemoryFenceAcquire() prevents the reordering of any load which precedes it in program order with any load or store which follows it in program order. In other words, it works as a LoadLoad and LoadStore barrier. It is equivalent to std::atomic_thread_fence(std::memory_order_acquire), even though the description in the C++11 standard is not that explicit.

The following example might illustrate which kind of reordering is allowed when MemoryFenceAcquire() is used:

*pa = a;              // [1] store a in the location pa points to
b = *pb;              // [2] load b from the location pb
MemoryFenceAcquire();
*pc = c;              // [3] store c in the location pc points to
d = *pd;              // [4] load d from the location pd

The store [1] can be reordered and executed after the fence (it could also be reordered with [2] and happen after it). The load [2] must be executed before the fence (but it could be reordered and happen before [1]). The store [3] must be executed after the fence (but it could be reordered and happen after [4]). The load [4] must be executed after the fence (but it could be reordered and happen before [3]).

A typical application is that once you have acquired a specific variable (a synchronization point) and it has a certain trigger value you can be sure that all the variables you are loading after the fence are valid and contain the values that have been stored before the corresponding release fence to that synchronization point in another thread.

Furthermore, a load operation with acquire semantics like value.LoadAcquire() is equivalent to a relaxed load value.LoadRelaxed() followed by a MemoryFenceAcquire().
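The synchronization-point pattern described above can be sketched with the standard C++ fences that the document states are equivalent to these macros. This is a minimal, self-contained illustration using std::atomic_thread_fence directly; the function and variable names are illustrative, not part of this library:

```cpp
// Message passing with a release fence (producer) and an acquire fence
// (consumer). Once the consumer has seen the trigger value and executed
// the acquire fence, the plain store to `payload` is guaranteed visible.
#include <atomic>
#include <thread>

static int payload = 0;                 // plain, non-atomic data
static std::atomic<bool> ready{false};  // the synchronization point

int run_acquire_release_demo() {
    payload = 0;
    ready.store(false);

    std::thread producer([] {
        payload = 42;                                         // store the data
        std::atomic_thread_fence(std::memory_order_release);  // ~ MemoryFenceRelease()
        ready.store(true, std::memory_order_relaxed);         // publish the trigger value
    });

    std::thread consumer([] {
        while (!ready.load(std::memory_order_relaxed)) {}     // wait for the trigger
        std::atomic_thread_fence(std::memory_order_acquire);  // ~ MemoryFenceAcquire()
        // After the acquire fence, payload is guaranteed to be 42 here.
    });

    producer.join();
    consumer.join();
    return payload;
}
```

Without the two fences (or equivalent ordering on the flag itself), the consumer could observe ready == true while still reading a stale payload.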

◆  MemoryFenceRelease

#define MemoryFenceRelease()

The terms load, store, acquire and release are used here as defined in the documentation of MemoryFenceAcquire().

MemoryFenceRelease() prevents the reordering of any load or store which precedes it in program order with any store which follows it in program order. This means it works as a LoadStore and StoreStore barrier. It is equivalent to a C++11 fence of type std::atomic_thread_fence(std::memory_order_release), even though the description in the C++11 standard is not that explicit.

The following example might illustrate which kind of reordering is allowed when MemoryFenceRelease() is used:

*pa = a;              // [1] store a in the location pa points to
b = *pb;              // [2] load b from the location pb
MemoryFenceRelease();
*pc = c;              // [3] store c in the location pc points to
d = *pd;              // [4] load d from the location pd

The store [1] must be executed before the fence (but it could be reordered with [2] and happen after it). The load [2] must be executed before the fence (but it could be reordered and happen before [1]). The store [3] must be executed after the fence (but it could be reordered and happen after [4]). The load [4] can be reordered and executed before the fence.

A store operation with release semantics like value.StoreRelease() is equivalent to a MemoryFenceRelease() followed by a value.StoreRelaxed(). Because the fence precedes the store, there is the following, perhaps unexpected, behaviour: when a StoreRelease() is followed by a StoreRelaxed(), the relaxed store might be reordered and executed first (but neither store will jump across the fence).
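A classic application of release/acquire ordering is intrusive reference counting, the pattern a class like the StrongReferenceCounter listed above typically relies on. The sketch below is illustrative only, assuming nothing about the actual library implementation; it uses a release decrement plus an acquire fence before destruction, again via the standard fence stated to be equivalent to MemoryFenceAcquire():

```cpp
// Minimal intrusive reference counter. The release ordering on the
// decrement ensures all of a thread's prior writes to the object are
// published before the count drops; the acquire fence before teardown
// ensures the destroying thread sees all of those writes.
#include <atomic>

class RefCounted {
public:
    void AddRef() {
        // Increments need no ordering; only the final decrement synchronizes.
        _count.fetch_add(1, std::memory_order_relaxed);
    }

    // Returns true when this call released the last reference and the
    // caller may safely destroy the object.
    bool Release() {
        if (_count.fetch_sub(1, std::memory_order_release) == 1) {
            // ~ MemoryFenceAcquire(): makes the stores of all other
            // releasing threads visible before the object is torn down.
            std::atomic_thread_fence(std::memory_order_acquire);
            return true;
        }
        return false;
    }

private:
    std::atomic<int> _count{1};  // the creating reference
};
```

Using a relaxed decrement with a separate acquire fence on the final-release path (instead of an acquire-release decrement) avoids paying for acquire ordering on every non-final decrement.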

◆  MemoryFenceSequential

#define MemoryFenceSequential()

The terms load, store, acquire and release are used here as defined in the documentation of MemoryFenceAcquire().

MemoryFenceSequential() prevents the reordering of any load or store which precedes it in program order with any load or store which follows it in program order. It makes sure that the preceding loads and stores are globally visible before any load or store that follows it. Besides being a LoadLoad, LoadStore and StoreStore barrier, it also works as a StoreLoad barrier, which neither of the other two fences does. It is equivalent to std::atomic_thread_fence(std::memory_order_seq_cst).
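The StoreLoad property is exactly what Dekker-style mutual exclusion needs: each thread stores its own flag and then loads the other thread's flag, and the store must not be reordered after the load. A hedged sketch using the standard sequential fence (stated above to be equivalent to MemoryFenceSequential()); the names are illustrative:

```cpp
// Dekker-style StoreLoad demonstration: each thread stores its own flag,
// issues a sequential fence, then loads the other flag. With the fences,
// at least one thread is guaranteed to observe the other's store; with
// only acquire or release fences, both loads could return false.
#include <atomic>
#include <thread>

static std::atomic<bool> x{false}, y{false};
static bool r1 = false, r2 = false;

void run_storeload_demo() {
    x.store(false);
    y.store(false);

    std::thread t1([] {
        x.store(true, std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_seq_cst);  // ~ MemoryFenceSequential()
        r1 = y.load(std::memory_order_relaxed);
    });
    std::thread t2([] {
        y.store(true, std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_seq_cst);  // ~ MemoryFenceSequential()
        r2 = x.load(std::memory_order_relaxed);
    });

    t1.join();
    t2.join();
    // Guaranteed: r1 || r2. Both being false would require a store to be
    // reordered after the following load, which the StoreLoad barrier forbids.
}
```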

Typedef Documentation

◆  AtomicInt32

using AtomicInt32 = Atomic32 < Int32 >

Atomic integer with the same size as Int32.

◆  AtomicUInt32

using AtomicUInt32 = Atomic32 < UInt32 >

Atomic unsigned integer with the same size as UInt32.

◆  AtomicInt64

using AtomicInt64 = Atomic64 < Int64 >

Atomic integer with the same size as Int64.

◆  AtomicUInt64

using AtomicUInt64 = Atomic64 < UInt64 >

Atomic unsigned integer with the same size as UInt64.

◆  AtomicInt

using AtomicInt = Atomic64 < Int >

Atomic integer with the same size as Int.

◆  AtomicUInt

using AtomicUInt = Atomic64 < UInt >

Atomic unsigned integer with the same size as UInt.

◆  AtomicInt16

using AtomicInt16 = Atomic16 < Int16 >

Atomic integer with the same size as Int16.

◆  AtomicUInt16

using AtomicUInt16 = Atomic16 < UInt16 >

Atomic unsigned integer with the same size as UInt16.

◆  AtomicInt8

using AtomicInt8 = Atomic8 < Char >

Atomic integer with the same size as Char.

◆  AtomicUInt8

using AtomicUInt8 = Atomic8 < UChar >

Atomic unsigned integer with the same size as UChar.

◆  AtomicFloat32

using AtomicFloat32 = AtomicFloatType < Float32 >

Atomic float with 32 bit size.

◆  AtomicFloat64

using AtomicFloat64 = AtomicFloatType < Float64 >

Atomic float with 64 bit size.

◆  AtomicVoidPtr

using AtomicVoidPtr = AtomicPtr < void >

Atomic pointer to void.

Copyright  © 2014-2025 乐数软件    

Ministry of Industry and Information Technology ICP filing: 粤ICP备14079481号-1