Good Software Architecture Practices with AADL

From AadlWiki




The AADL model of each code pattern is available in our GitHub examples repository. For each pattern, we provide an AADL model that can be imported and used in OSATE, as well as a validation theorem to check the correct use of the pattern. The theorems are embedded in our OSATE toolset as shown below.

The following sections describe each modelling pattern.

Reference Materials

The following patterns are based on a review of the existing software architecture analysis literature. We used the following reports:


Periodic Communication

This pattern is very common in safety-critical systems. It consists of one component periodically sending data to another. The sender can be, for example, a device (sensor) that sends data to a common data repository used for computations (such as a control-law algorithm).

Patterns Details

This pattern uses a single thread/device that sends data through a communication link. In this example, we use an AADL event port to exchange events between the components. The AADL code of this pattern can be found at various locations in our example repository, such as here (for the queued communication) or here (for the thread rate).
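As an illustration, a minimal AADL sketch of this pattern could look as follows. The component and port names are purely illustrative and are not taken from the repository:

 thread sender
 features
    outp : out event data port;
 properties
    Dispatch_Protocol => Periodic;
    Period => 200 ms;
 end sender;
 
 thread receiver
 features
    inp : in event data port;
 properties
    Dispatch_Protocol => Periodic;
    Period => 100 ms;
 end receiver;
 
 process proc
 end proc;
 
 process implementation proc.impl
 subcomponents
    src : thread sender;
    dst : thread receiver;
 connections
    --  The sender periodically emits events consumed by the receiver
    c0 : port src.outp -> dst.inp;
 end proc.impl;

With these values the sender (200 ms) does not dispatch more often than the receiver (100 ms), so no data is lost.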

Issues regarding period

This pattern can have an issue if the sender is executed at a higher frequency than the receiver. In that case, data can be lost and the receiver might miss some important data. To avoid this type of issue, we integrate a theorem in the OSATE software architecture toolset that checks that the period (in milliseconds) of the sender is not lower than the period of the receiver. The model illustrating this issue can be found here.

The following theorem checks the consistency of communicating threads.

 theorem period_consistency
 --  If two threads are communicating using ports,
 --  the sender must have a period that is not lower
 --  than that of the receiver, so that the receiver
 --  can handle its data on time.
 foreach Conn in Connection_Set do
    foreach thr_src in {x in Thread_Set | Owner (Source(Conn)) = x} do
    foreach thr_dst in {y in Thread_Set | Owner (Destination(Conn)) = y} do
          check ( (Property (thr_src , "Period")) >= (Property (thr_dst, "Period")));

The following pictures show the use of our verification functionality included in our software architecture tool, OSATE.

Issues regarding queued communications

Another issue is related to the use of queued communication. When using queued communication, the sender can run faster than the receiver. However, one must still check that the periods are consistent and that the sender does not send so quickly that the queue runs out of space for new data. The validation theorem then ensures that the queue dimension is set consistently. The following LUTE theorem (integrated in our toolset) checks that property:

 theorem queue_dimension
 --  This theorem checks the correct use
 --  of buffered communication. This is recommended
 --  and/or advised by the NASA report on Flight
 --  Software Complexity. This example also shows
 --  how to check timing requirements when using buffers.
 --  For reference, see the document
 --  NASA Study on Flight Software Complexity
 --  Appendix B, Section 4 (page 6)
 foreach Conn in {c in Connection_Set | Property_Exists (Destination (c), "Queue_Size")} do
    foreach thr_src in {x in Thread_Set | Owner (Source(Conn)) = x} do 
    foreach thr_dst in {y in Thread_Set | Owner (Destination(Conn)) = y} do  
          check ( (Property (thr_src , "Period") * Property (Destination (Conn), "Queue_Size")) >= (Property (thr_dst, "Period")));

The related example can be found here.
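For illustration, a queued connection can be modelled in AADL by setting the Queue_Size property on the receiving port. The sketch below is a minimal, hypothetical example whose names and values are not taken from the repository:

 thread fast_sender
 features
    outp : out event data port;
 properties
    Dispatch_Protocol => Periodic;
    Period => 50 ms;
 end fast_sender;
 
 thread slow_receiver
 features
    --  The port queue can buffer up to 4 pending events
    inp : in event data port { Queue_Size => 4; };
 properties
    Dispatch_Protocol => Periodic;
    Period => 200 ms;
 end slow_receiver;

With these values, 50 ms * 4 = 200 ms >= 200 ms, so the queue dimension is consistent even though the sender runs four times faster than the receiver.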

Use of Shared Data

Shared data mostly consists of a shared variable used by several tasks that read it, write it, or both. We represent this pattern using AADL thread components that access data components. The following picture shows the graphical representation of such a pattern, with three tasks (AADL thread components) sharing and accessing one data component.

Examples of this software architecture pattern can be found here.
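A minimal AADL sketch of this shared-data pattern might look like the following (again, the component names are purely illustrative):

 data shared_var
 end shared_var;
 
 thread worker
 features
    --  Each task requires access to the shared data component
    d : requires data access shared_var;
 end worker;
 
 process implementation proc.impl
 subcomponents
    sh : data shared_var;
    t1 : thread worker;
    t2 : thread worker;
    t3 : thread worker;
 connections
    a1 : data access sh <-> t1.d;
    a2 : data access sh <-> t2.d;
    a3 : data access sh <-> t3.d;
 end proc.impl;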

Issues related to the use of shared data

When using shared data, one wants to avoid value inconsistencies and other race conditions. This means that you have to be careful and check the tasks that are using the shared data. To check the correct/appropriate use of shared data, our validation tool checks that:

  • if more than one task writes into the data, the data is associated with a locking mechanism (mutex, semaphore, etc.).
  • if only one task writes into the shared data, no locking mechanism is used. A locking mechanism could introduce a resource-consumption overhead (especially in constrained environments with limited processing resources), so checking for potentially useless resource usage is a benefit as well.
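In AADL, such a locking mechanism can be expressed with the standard Concurrency_Control_Protocol property attached to the shared data component; a sketch is shown below (the exact protocol values available depend on the project property set):

 data shared_var
 properties
    --  Declare the locking policy protecting concurrent writers
    Concurrency_Control_Protocol => Priority_Ceiling;
 end shared_var;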

Variable Scope

A variable is a piece of data located in a process, task, or subprogram. When the data is contained at process level, it is a global variable and any task/subprogram can access it. When located in a task (AADL thread), it is accessible only to the task and its subprograms. In the same way, data contained in a subprogram restricts access to the subprogram call. The variable scope impacts its visibility within the overall software architecture, but also its lifespan: data contained in a subprogram exists only during the subprogram call, while data in a task is available as long as the thread is active.

Several examples of software architectures using variables at different scope levels can be found here, while the validation theorems are included in OSATE.
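The three scope levels can be sketched in AADL with data subcomponents placed at different levels, as below (the component names are hypothetical):

 data counter
 end counter;
 
 process implementation proc.impl
 subcomponents
    global_var : data counter;   --  process level: visible to every thread
    t1 : thread worker;
 end proc.impl;
 
 thread implementation worker.impl
 subcomponents
    local_var : data counter;    --  thread level: visible only inside this thread
 end worker.impl;
 
 subprogram implementation compute.impl
 subcomponents
    tmp : data counter;          --  subprogram level: exists only during the call
 end compute.impl;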

Issues related to variable scope

Using global variables can lead to many issues:

  • Data can be accessed/modified by any part of the software
  • It makes it difficult to decouple the software and design it in a more modular fashion.
  • The variable can be re-declared inconsistently by other modules

For that reason, it might be a good strategy to analyze the use of global variables and suggest refactoring the software so that:

  • Software is decomposed into modules that can be reused and deployed on separate processing nodes
  • Variable assignment and changes are restricted to a well-defined scope (a variable cannot be modified from just anywhere)
  • Each variable is clearly defined (no multiple inconsistent declarations)
  • Data flow is clearly defined and bounded to a specific scope
  • It reduces the state space and eases software validation/certification

In order to reduce the scope of variables, our toolset analyzes the software architecture and checks:

  • if a data component is read by several tasks and written by only one task, it suggests moving the data into the writer task and adding interfaces for the readers. This makes the software flow easier to trace.
  • if a data component is read by one task and written by several tasks, it suggests moving the data into the reader and adding interfaces for the writers. As before, this helps to understand the data-flow logic.
  • if a data component is contained at process level but only read/modified in a subprogram, we suggest reducing its scope to the subprogram level. By reducing the scope of the variable, we avoid defining global variables and thus reduce the state space of the program as well.