
Lock-free programming is a technique that allows concurrent updates of shared data structures without using explicit locks. It guarantees that no thread blocks for an arbitrarily long time waiting for another, which can improve performance.

Advantages:

  • Can be used in places where locks must be avoided, such as interrupt handlers
  • Efficiency benefits compared to lock-based algorithms for some workloads, including potential scalability benefits on multiprocessor machines
  • Avoidance of priority inversion in real-time systems

Limitations:

  • Requires use of special atomic processor instructions such as CAS (compare and swap) or LL/SC (load linked/store conditional)
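As a minimal sketch of the retry idiom these instructions enable, the following uses C11 <stdatomic.h>; the helper name is illustrative, not a standard function. On LL/SC architectures, the compiler typically maps the weak compare-exchange to a load-linked/store-conditional pair.

```c
#include <stdatomic.h>

/* Illustrative helper: add delta to *obj using the CAS retry idiom.
   Returns the value held before the addition. */
int atomic_fetch_add_cas(_Atomic int *obj, int delta) {
  int expected = atomic_load(obj);
  /* A weak CAS may fail spuriously, so it is always used in a loop;
     on failure, 'expected' is refreshed with the current value. */
  while (!atomic_compare_exchange_weak(obj, &expected, expected + delta)) {
    /* retry with the refreshed 'expected' */
  }
  return expected;
}
```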

Applications:

  • Read-copy-update (RCU) in the Linux 2.5 kernel
  • Lock-free programming on AMD multicore systems

The ABA problem occurs during synchronization when a memory location is read twice and holds the same value for both reads. Between the two reads, however, another thread may have modified the value, done other work, and then modified the value back, fooling the first thread into thinking that "nothing has changed."
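The pattern can be sketched in a single thread by simulating the interleaving (the names here are illustrative): a value-based CAS succeeds even though the location was changed twice between the reads, because it compares only the current value, not its history.

```c
#include <stdatomic.h>
#include <stdbool.h>

static _Atomic int shared = 100;           /* value "A" */

bool observe_aba(void) {
  int observed = atomic_load(&shared);     /* first read: sees A (100) */

  /* Another thread could run here and perform A -> B -> A: */
  atomic_store(&shared, 200);              /* A -> B */
  atomic_store(&shared, 100);              /* B -> A */

  /* The CAS succeeds: the intervening updates are invisible to it. */
  return atomic_compare_exchange_strong(&shared, &observed, 300);
}
```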

Noncompliant Code Example

This noncompliant code example attempts to zero the maximum element of an array. Assume that this code may run in a multithreaded environment and that all variables are accessible to other threads.

/* Sets *index to the index of the maximum element in array
   and *value to the maximum array value */
void find_max_element(_Atomic int array[], size_t *index, int *value);

/* ... */

_Atomic int array[];
size_t index;
int value;
find_max_element(array, &index, &value);
/* ... */
if (!atomic_compare_exchange_weak(&array[index], &value, 0)) {
  /* Handle error */
}

The compare-and-exchange operation sets array[index] to 0 if and only if it still contains value. However, even when it succeeds, this does not necessarily zero out the maximum value of the array because:

  • index may have changed.
  • value may have changed.
  • value may no longer be the maximum value in the array.
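A lock-free alternative can be sketched by recomputing the maximum on every failed exchange (all helper names here are illustrative, not from the original example). Note that this narrows the window described above but does not eliminate ABA: another thread may restore the old maximum between the search and the exchange.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative helper: index of the maximum element. */
static size_t max_index(_Atomic int arr[], size_t n) {
  size_t best = 0;
  for (size_t i = 1; i < n; ++i)
    if (atomic_load(&arr[i]) > atomic_load(&arr[best]))
      best = i;
  return best;
}

/* Retry sketch: redo the search whenever the CAS fails. */
void zero_max_retry(_Atomic int arr[], size_t n) {
  for (;;) {
    size_t idx = max_index(arr, n);
    int expected = atomic_load(&arr[idx]);
    if (atomic_compare_exchange_weak(&arr[idx], &expected, 0))
      return;   /* the element we observed as maximum is now zero */
  }
}
```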

Compliant Solution

This compliant solution uses a mutex to prevent the data from being modified during the operation.

_Atomic int array[];
size_t index;
int value;
mtx_t array_mutex;

if (mtx_lock(&array_mutex) != thrd_success) {
  /* Handle error */
}
find_max_element(array, &index, &value);
/* ... */
if (!atomic_compare_exchange_weak(&array[index], &value, 0)) {
  /* Handle error */
}
if (mtx_unlock(&array_mutex) != thrd_success) {
  /* Handle error */
}

Noncompliant Code Example (GNU Glib)

This code implements a queue data structure using lock-free programming. It is implemented using GLib. The function CAS() internally uses g_atomic_pointer_compare_and_exchange().

#include <glib.h>
#include <glib-object.h>

typedef struct node_s {
  void *data;
  struct node_s *next;
} Node;

typedef struct queue_s {
  Node *head;
  Node *tail;
} Queue;

Queue* queue_new(void) {
  Queue *q = g_slice_new(Queue);
  q->head = q->tail = g_slice_new0(Node);
  return q;
}

void queue_enqueue(Queue *q, gpointer data) {
  Node *node;
  Node *tail;
  Node *next;

  node = g_slice_new(Node);
  node->data = data;
  node->next = NULL;
  while (TRUE) {
    tail = q->tail;
    next = tail->next;
    if (tail != q->tail) {
      continue;
    }
    if (next != NULL) {
      CAS(&q->tail, tail, next);
      continue;
    }
    if (CAS(&tail->next, NULL, node)) {
      break;
    }
  }
  CAS(&q->tail, tail, node);
}

gpointer queue_dequeue(Queue *q) {
  Node *head;
  Node *tail;
  Node *next;
  gpointer data;

  while (TRUE) {
    head = q->head;
    tail = q->tail;
    next = head->next;
    if (head != q->head) {
      continue;
    }
    if (next == NULL) {
      return NULL; // Empty
    }
    if (head == tail) {
      CAS(&q->tail, tail, next);
      continue;
    }
    data = next->data;
    if (CAS(&q->head, head, next)) {
      break;
    }
  }
  g_slice_free(Node, head);
  return data;
}

Assume there are two threads (T1 and T2) operating simultaneously on the queue. The queue looks like this:

head -> A -> B -> C -> tail

The following sequence of operations occurs:

Thread | Queue Before                | Operation                                                                                                   | Queue After
-------|-----------------------------|-------------------------------------------------------------------------------------------------------------|----------------------------
T1     | head -> A -> B -> C -> tail | Enters queue_dequeue(); reads head = A, tail = C, next = B; after executing data = next->data;, the thread is preempted | head -> A -> B -> C -> tail
T2     | head -> A -> B -> C -> tail | Removes node A                                                                                              | head -> B -> C -> tail
T2     | head -> B -> C -> tail      | Removes node B                                                                                              | head -> C -> tail
T2     | head -> C -> tail           | Enqueues node A back into the queue                                                                         | head -> C -> A -> tail
T2     | head -> C -> A -> tail      | Removes node C                                                                                              | head -> A -> tail
T2     | head -> A -> tail           | Enqueues a new node D; after the enqueue operation, T2 is preempted                                         | head -> A -> D -> tail
T1     | head -> A -> D -> tail      | Resumes; the CAS compares the local head (A) with q->head (A), succeeds, and updates q->head to node B, which has already been removed | undefined

According to the sequence of events in this table, head will now point to memory that was freed. Also, if reclaimed memory is returned to the operating system (for example, using munmap()), access to such memory locations can result in fatal access violation errors. The ABA problem occurred because of the internal reuse of nodes that have been popped off the list or the reclamation of memory occupied by removed nodes.

Compliant Solution (GNU Glib, Hazard Pointers)

The core idea is to associate with each thread a small number (typically one or two) of single-writer, multi-reader shared pointers, called hazard pointers.

A hazard pointer either has a null value or points to a node that may be accessed later by that thread without further validation that the reference to the node is still valid. Each hazard pointer may be written only by its owner thread but may be read by other threads.

In this solution, communication with the associated algorithms is accomplished only through hazard pointers and a procedure RetireNode() that is called by threads to pass the addresses of retired nodes.
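A minimal C sketch of the hazard-pointer record described above might look like the following (all names, types, and sizes are illustrative, not from a real library). Each slot is written only by its owning thread and read by all threads, so plain atomic loads and stores suffice.

```c
#include <stdatomic.h>
#include <stddef.h>

enum { MAX_THREADS = 8, K = 2 };   /* K hazard pointers per thread */

/* One record per thread; zero-initialized slots mean "no hazard". */
typedef struct {
  _Atomic(void *) hp[K];
} HPRec;

static HPRec hp_table[MAX_THREADS];

/* Owner thread publishes a pointer it is about to dereference. */
static void hazard_set(int tid, int slot, void *p) {
  atomic_store(&hp_table[tid].hp[slot], p);
}

/* A reclaiming thread checks whether any thread still protects p;
   if so, the retired node must stay on rlist until a later scan. */
static int is_hazardous(void *p) {
  for (int t = 0; t < MAX_THREADS; ++t)
    for (int i = 0; i < K; ++i)
      if (atomic_load(&hp_table[t].hp[i]) == p)
        return 1;
  return 0;
}
```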

Pseudocode:

// Hazard pointer types and structure
structure HPRecType {
  HP[K]: *NodeType;
  Next: *HPRecType;
}
// The header of the HPRec list
HeadHPRec: *HPRecType;
// Per-thread private variables
rlist: listType;   // initially empty
rcount: integer;   // initially 0


// The RetireNode routine
RetireNode(node: *NodeType) {
  rlist.push(node);
  rcount++;
  if (rcount >= R)
    Scan(HeadHPRec);
}


// The Scan routine
Scan(head: *HPRecType) {
  // Stage 1: scan the HP list and insert non-null values in plist
  plist.init();
  hprec <- head;
  while (hprec != null) {
    for (i <- 0 to K-1) {
      hptr <- hprec^HP[i];
      if (hptr != null)
        plist.insert(hptr);
    }
    hprec <- hprec^Next;
  }

  // Stage 2: search plist
  tmplist <- rlist.popAll();
  rcount <- 0;
  node <- tmplist.pop();
  while (node != null) {
    if (plist.lookup(node)) {
      rlist.push(node);
      rcount++;
    }
    else {
      PrepareForReuse(node);
    }
    node <- tmplist.pop();
  }
  plist.free();
}

The scan consists of two stages. The first stage scans the hazard pointer list for non-null values. Whenever a non-null value is encountered, it is inserted into a local list, plist, which can be implemented as a hash table. The second stage checks each node in rlist against the pointers in plist. If the lookup yields no match, the node is ready for arbitrary reuse; otherwise, it is retained in rlist until the next scan by the current thread. Insertion and lookup in plist take constant expected time. The task of the memory reclamation method is to determine safely when a retired node is eligible for reuse.

In the implementation, the pointer being removed is stored in the hazard pointer, preventing other threads from reusing it and thereby avoiding the ABA problem.

Code:

void queue_enqueue(Queue *q, gpointer data) {
  Node *node;
  Node *tail;
  Node *next;

  node = g_slice_new(Node);
  node->data = data;
  node->next = NULL;
  while (TRUE) {
    tail = q->tail;
    LF_HAZARD_SET(0, tail);           // Mark tail as hazardous
    if (tail != q->tail) {            // Check tail hasn't changed
      continue;
    }
    next = tail->next;
    if (tail != q->tail) {
      continue;
    }
    if (next != NULL) {
      CAS(&q->tail, tail, next);
      continue;
    }
    if (CAS(&tail->next, NULL, node)) {
      break;
    }
  }
  CAS(&q->tail, tail, node);
}


gpointer queue_dequeue(Queue *q) {
  Node *head;
  Node *tail;
  Node *next;
  gpointer data;

  while (TRUE) {
    head = q->head;
    LF_HAZARD_SET(0, head);    // Mark head as hazardous
    if (head != q->head) {     // Check head hasn't changed
      continue;
    }
    tail = q->tail;
    next = head->next;
    LF_HAZARD_SET(1, next);    // Mark next as hazardous
    if (head != q->head) {
      continue;
    }
    if (next == NULL) {
      return NULL; // Empty
    }
    if (head == tail) {
      CAS(&q->tail, tail, next);
      continue;
    }
    data = next->data;
    if (CAS(&q->head, head, next)) {
      break;
    }
  }
  LF_HAZARD_UNSET(head);        // Retire head, and perform
                                // reclamation if needed.
  return data;
}

Compliant Solution (GNU Glib, Mutex)

In this compliant solution, mtx_lock() is used to lock the queue.

When thread 1 locks on the queue to perform any operation, thread 2 cannot perform any operation on the queue, which prevents the ABA problem.

#include <glib.h>
#include <glib-object.h>
#include <threads.h>

typedef struct node_s {
  void *data;
  struct node_s *next;
} Node;

typedef struct queue_s {
  Node *head;
  Node *tail;
  mtx_t mutex;
} Queue;

Queue* queue_new(void) {
  Queue *q = g_slice_new(Queue);
  q->head = q->tail = g_slice_new0(Node);
  if (mtx_init(&q->mutex, mtx_plain) != thrd_success) {
    /* Handle error */
  }
  return q;
}

void queue_enqueue(Queue *q, gpointer data) {
  Node *node;

  node = g_slice_new(Node);
  node->data = data;
  node->next = NULL;

  // Lock the queue before accessing the contents
  mtx_lock(&q->mutex);

  if (q->head == NULL) {
    q->head = node;
    q->tail = node;
  }
  else {
    q->tail->next = node;
    q->tail = node;
  }
  // Unlock the mutex
  mtx_unlock(&q->mutex);
}

gpointer queue_dequeue(Queue *q) {
  Node *head;
  Node *next;
  gpointer data;

  mtx_lock(&q->mutex);

  head = q->head;
  next = head->next;
  if (next == NULL) {   // Queue is empty (only the dummy node remains)
    mtx_unlock(&q->mutex);
    return NULL;
  }
  data = next->data;
  q->head = next;

  g_slice_free(Node, head);
  mtx_unlock(&q->mutex);

  return data;
}

Risk Assessment

The likelihood of a race condition occurring is low. If one does occur, however, reading memory that has already been freed can lead to abnormal program termination or unintended information disclosure.

 

Guideline | Severity | Likelihood | Remediation Cost | Priority | Level
----------|----------|------------|------------------|----------|------
CON39-C   | Medium   | Unlikely   | High             | P2       | L3

Sources

Michael, M. M. "Hazard Pointers: Safe Memory Reclamation for Lock-Free Objects." IEEE Transactions on Parallel and Distributed Systems 15, 8 (2004).

AMD Developer Central: Articles & Whitepapers: "Lock-Free Programming on AMD Multi-Core Systems."

Asgher, Sarmad. "Practical Lock-Free Buffers." Dr. Dobb's Go-Parallel, August 26, 2000.


