Lock-free programming is a technique that allows concurrent updates of shared data structures without the use of explicit locks. This method ensures that no thread blocks for an arbitrarily long time.
Advantages:
- Can be used in places where locks must be avoided, such as interrupt handlers
- Efficiency benefits compared to lock-based algorithms for some workloads, including potential scalability benefits on multiprocessor machines
- Avoidance of priority inversion in real-time systems
...
- Requires use of special atomic processor instructions such as CAS (compare and swap) or LL/SC (load linked/store conditional)
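The CAS() calls that appear in the examples below stand for such a compare-and-swap primitive. As a rough sketch (an assumption of this write-up, not code taken from any particular library), a pointer-sized CAS can be expressed with C11 atomics; the helper name cas_ptr is hypothetical:

```c
// Rough sketch (assumption, not from the original rule): expressing a
// pointer-sized CAS with C11 atomics. The helper name cas_ptr is hypothetical.
#include <stdatomic.h>
#include <stdbool.h>

static bool cas_ptr(void * _Atomic *target, void *expected, void *desired) {
  // Compiles down to the processor's compare-and-swap instruction,
  // or to an LL/SC loop on architectures that provide those instead.
  return atomic_compare_exchange_strong(target, &expected, desired);
}
```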
Applications:
- Read-copy-update (RCU) in the Linux 2.5 kernel
- Lock-free programming on AMD multicore systems
The ABA problem occurs during synchronization when a location is read twice and has the same value for both reads. However, between the two reads another thread has modified the value, done other work, and then modified the value back, thereby fooling the first thread into thinking that nothing has changed even though the second thread's work violates that assumption.
The ABA problem arises because of the internal reuse of nodes that have been popped off the list or because of the reclamation of memory occupied by removed nodes.
Noncompliant Code Example (GNU Glib)
...
```c
#include <glib.h>
#include <glib-object.h>
/* CAS() denotes an atomic compare-and-swap operation. */
typedef struct Node Node;
typedef struct Queue Queue;

struct Node {
  void *data;
  Node *next;
};

struct Queue {
  Node *head;
  Node *tail;
};

Queue* queue_new(void) {
  Queue *q = g_slice_new(Queue);
  q->head = q->tail = g_slice_new0(Node);
  return q;
}
void queue_enqueue(Queue *q, gpointer data) {
  Node *node;
  Node *tail;
  Node *next;

  node = g_slice_new(Node);
  node->data = data;
  node->next = NULL;

  while (TRUE) {
    tail = q->tail;
    next = tail->next;
    if (tail != q->tail) {
      continue;
    }
    if (next != NULL) {
      CAS(&q->tail, tail, next);
      continue;
    }
    if (CAS(&tail->next, NULL, node)) {
      break;
    }
  }
  CAS(&q->tail, tail, node);
}
gpointer queue_dequeue(Queue *q) {
  Node *head;
  Node *tail;
  Node *next;
  gpointer data;

  while (TRUE) {
    head = q->head;
    tail = q->tail;
    next = head->next;
    if (head != q->head) {
      continue;
    }
    if (next == NULL) {
      return NULL; // Empty
    }
    if (head == tail) {
      CAS(&q->tail, tail, next);
      continue;
    }
    data = next->data;
    if (CAS(&q->head, head, next)) {
      break;
    }
  }
  g_slice_free(Node, head);
  return data;
}
```
Assume that two threads (T1 and T2) operate simultaneously on the queue, which looks like this:
...
The following sequence of operations occurs:

| Thread | Queue Before | Operation | Queue After |
|---|---|---|---|
| T1 | head -> A -> B -> C -> tail | Enters queue_dequeue(), reads head (A) and next (B), and is then preempted | head -> A -> B -> C -> tail |
| T2 | head -> A -> B -> C -> tail | Removes node A | head -> B -> C -> tail |
| T2 | head -> B -> C -> tail | Removes node B | head -> C -> tail |
| T2 | head -> C -> tail | Enqueues node A back into the queue | head -> C -> A -> tail |
| T2 | head -> C -> A -> tail | Removes node C | head -> A -> tail |
| T2 | head -> A -> tail | Enqueues a new node D | head -> A -> D -> tail |
| T1 | head -> A -> D -> tail | Thread 1 resumes execution | undefined |
According to the sequence of events in this table, head now points to memory that was freed. Also, if the reclaimed memory is returned to the operating system (for example, using munmap()), access to such memory locations can result in fatal access violation errors.
...
The core idea is to associate a number (typically one or two) of single-writer, multireader shared pointers, called hazard pointers, with each thread.
A hazard pointer either has a null value or points to a node that may be accessed later by that thread without further validation that the reference to the node is still valid. Each hazard pointer may be written only by its owner thread but may be read by other threads.
In this solution, communication with the associated algorithms is accomplished only through hazard pointers and a procedure RetireNode() that is called by threads to pass the addresses of retired nodes.
Pseudocode:
```
// Hazard pointer types and structure
structure HPRecType {
  HP[K]: *NodeType;
  Next: *HPRecType;
}

// The header of the HPRec list
HeadHPRec: *HPRecType;

// Per-thread private variables
rlist: listType;    // initially empty
rcount: integer;    // initially 0

// The RetireNode routine
RetireNode(node: *NodeType) {
  rlist.push(node);
  rcount++;
  if (rcount >= R)
    Scan(HeadHPRec);
}

// The Scan routine
Scan(head: *HPRecType) {
  // Stage 1: Scan the HP list and insert non-null values in plist
  plist.init();
  hprec <- head;
  while (hprec != null) {
    for (i <- 0 to K-1) {
      hptr <- hprec^HP[i];
      if (hptr != null)
        plist.insert(hptr);
    }
    hprec <- hprec^Next;
  }
  // Stage 2: Search plist
  tmplist <- rlist.popAll();
  rcount <- 0;
  node <- tmplist.pop();
  while (node != null) {
    if (plist.lookup(node)) {
      rlist.push(node);
      rcount++;
    } else {
      PrepareForReuse(node);
    }
    node <- tmplist.pop();
  }
  plist.free();
}
```
The scan consists of two stages. The first stage involves scanning the hazard pointer (HP) list for non-null values. Whenever a non-null value is encountered, it is inserted into a local list, plist, which can be implemented as a hash table. The second stage involves checking each node in rlist against the pointers in plist. If the lookup yields no match, the node is identified as ready for arbitrary reuse. Otherwise, it is retained in rlist until the next scan by the current thread. Insertion and lookup in plist take constant expected time. The task of the memory reclamation method is to determine safely when a retired node is eligible for reuse.
In the implementation, the pointer being removed is stored in a hazard pointer, preventing other threads from reusing it and thereby avoiding the ABA problem.
Code:
```c
void queue_enqueue(Queue *q, gpointer data) {
  Node *node;
  Node *tail;
  Node *next;

  node = g_slice_new(Node);
  node->data = data;
  node->next = NULL;

  while (TRUE) {
    tail = q->tail;
    LF_HAZARD_SET(0, tail);      // Mark tail as hazardous
    if (tail != q->tail) {       // Check tail hasn't changed
      continue;
    }
    next = tail->next;
    if (tail != q->tail) {
      continue;
    }
    if (next != NULL) {
      CAS(&q->tail, tail, next);
      continue;
    }
    if (CAS(&tail->next, NULL, node)) {
      break;
    }
  }
  CAS(&q->tail, tail, node);
}
gpointer queue_dequeue(Queue *q) {
  Node *head;
  Node *tail;
  Node *next;
  gpointer data;

  while (TRUE) {
    head = q->head;
    LF_HAZARD_SET(0, head);      // Mark head as hazardous
    if (head != q->head) {       // Check head hasn't changed
      continue;
    }
    tail = q->tail;
    next = head->next;
    LF_HAZARD_SET(1, next);      // Mark next as hazardous
    if (head != q->head) {
      continue;
    }
    if (next == NULL) {
      return NULL;               // Empty
    }
    if (head == tail) {
      CAS(&q->tail, tail, next);
      continue;
    }
    data = next->data;
    if (CAS(&q->head, head, next)) {
      break;
    }
  }
  LF_HAZARD_UNSET(head);  // Retire head, and perform
                          // reclamation if needed
  return data;
}
```
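The LF_HAZARD_SET() and LF_HAZARD_UNSET() macros are used above but not defined in the listing. The following is a minimal sketch of how they could be backed by the per-thread HPRecType record from the pseudocode; my_hprec, RetireNode(), and the fixed K = 2 are assumptions carried over from that pseudocode, not the API of a particular library.

```c
// Minimal sketch (assumption): mapping the hazard-pointer macros onto the
// pseudocode's per-thread HPRecType record. Not a real library API.
#define K 2                       /* hazard pointers per thread */

typedef struct HPRecType HPRecType;
struct HPRecType {
  void * volatile HP[K];          /* single-writer, multireader pointers */
  HPRecType *Next;                /* next record in the global HPRec list */
};

/* Each thread owns one record, assumed to be allocated and linked into the
 * global HPRec list when the thread starts. */
static _Thread_local HPRecType *my_hprec;

/* RetireNode() is the reclamation routine from the pseudocode above. */
void RetireNode(void *node);

/* Publish a pointer so that Scan() running in other threads will see it.
 * A real implementation also needs a memory barrier after this store,
 * before the pointer is re-validated against the queue. */
#define LF_HAZARD_SET(i, p)  (my_hprec->HP[(i)] = (void *)(p))

/* Clear this thread's hazard pointers and hand the removed node to the
 * reclamation machinery. */
#define LF_HAZARD_UNSET(p)     \
  do {                         \
    my_hprec->HP[0] = NULL;    \
    my_hprec->HP[1] = NULL;    \
    RetireNode(p);             \
  } while (0)
```

Scan(), as shown in the pseudocode, then decides when a retired node may actually be freed with g_slice_free().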
...
When thread 1 locks the queue to perform an operation, thread 2 cannot perform any operation on the queue, which prevents the ABA problem.
```c
#include <glib.h>
#include <glib-object.h>
#include <threads.h>

typedef struct Node Node;
typedef struct Queue Queue;

struct Node {
  void *data;
  Node *next;
};

struct Queue {
  Node *head;
  Node *tail;
  mtx_t mutex;
};

Queue* queue_new(void) {
  Queue *q = g_slice_new(Queue);
  q->head = q->tail = g_slice_new0(Node);
  // Initialize the mutex that protects the queue
  mtx_init(&q->mutex, mtx_plain);
  return q;
}
void queue_enqueue(Queue *q, gpointer data) {
  Node *node;

  // Lock the queue before accessing the contents
  mtx_lock(&(q->mutex));
  node = g_slice_new(Node);
  node->data = data;
  node->next = NULL;
  if (q->head == NULL) {
    q->head = node;
    q->tail = node;
  } else {
    q->tail->next = node;
    q->tail = node;
  }
  // Unlock the mutex
  mtx_unlock(&(q->mutex));
}
gpointer queue_dequeue(Queue *q) {
  Node *head;
  Node *next;
  gpointer data;

  mtx_lock(&(q->mutex));
  head = q->head;
  next = head->next;
  if (next == NULL) {          // Only the dummy node remains: queue is empty
    mtx_unlock(&(q->mutex));
    return NULL;
  }
  data = next->data;
  q->head = next;
  g_slice_free(Node, head);
  mtx_unlock(&(q->mutex));
  return data;
}
```
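The following usage sketch (not part of the original example) shows the mutex-protected queue shared between two C11 threads; producer(), consumer(), and the values array are illustrative names only.

```c
#include <stdio.h>
#include <threads.h>

// Hypothetical demonstration: the queue above shared by two threads.
static int producer(void *arg) {
  Queue *q = arg;
  static int values[3] = {1, 2, 3};
  for (int i = 0; i < 3; i++) {
    queue_enqueue(q, &values[i]);   // Safe: enqueue locks q->mutex
  }
  return 0;
}

static int consumer(void *arg) {
  Queue *q = arg;
  for (int i = 0; i < 3; i++) {
    int *p = queue_dequeue(q);      // Safe: dequeue locks q->mutex
    if (p != NULL) {
      printf("dequeued %d\n", *p);
    }
  }
  return 0;
}

int main(void) {
  Queue *q = queue_new();
  thrd_t t1, t2;
  thrd_create(&t1, producer, q);
  thrd_create(&t2, consumer, q);
  thrd_join(t1, NULL);
  thrd_join(t2, NULL);
  return 0;
}
```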
...
The likelihood of a race condition is low. If the race condition does occur, reading memory that has already been freed can lead to abnormal program termination or unintended information disclosure.
| Guideline | Severity | Likelihood | Remediation Cost | Priority | Level |
|---|---|---|---|---|---|
| CON39-C | Medium | Unlikely | High | P2 | L3 |
...
Sources
Michael, M. M. "Hazard Pointers: Safe Memory Reclamation for Lock-Free Objects." IEEE Transactions on Parallel and Distributed Systems 15, no. 8 (August 2004).
AMD Developer Central, Articles & Whitepapers. "Lock-Free Programming on AMD Multi-Core Systems."
Asgher, Sarmad. "Practical Lock-Free Buffers." Dr. Dobb's Go-Parallel, August 26, 2000.
...