Many activities need to be completed when designing applications for real-time systems. One group of activities requires identifying certain elements. Some of the more important elements to identify include:
1. system requirements,
2. inputs and outputs,
3. real-time deadlines,
4. events and event response times,
5. event arrival patterns and frequencies,
6. required objects and other components,
7. tasks that need to be concurrent,
8. system schedulability, and
9. useful or needed synchronization protocols for inter-task communications.
Depending on the design methodologies and modeling tools that a design team is using, the list of steps to be taken can vary, as well as the execution order. Regardless of the methodology, eventually a design team must consider how to decompose the application into concurrent tasks (Step 7).
This chapter provides guidelines and discussions on how real-time embedded applications can be decomposed. Many design teams use formalized object-oriented development techniques and modeling languages, such as UML, to model their real-time systems initially. The concepts discussed in this section are complementary to object-oriented design approaches; much emphasis is placed on decomposing the application into separate tasks to achieve concurrency. Through examples, approaches to decomposing applications into concurrent tasks are discussed. In addition, general guidelines for designing concurrency in a real-time application are provided.
These guidelines and recommendations are based on a combination of sources: lessons learned from current engineering design practices, work done by H. Gomaa, current UML modeling approaches, and work done by other researchers in the real-time field. Our guidelines provide high-level strategies for decomposing real-time applications for concurrency. Our recommendations, on the other hand, are specific strategies focusing on the implementation of concurrency. The guidelines and recommendations do not necessarily cover every exception that can arise when designing a real-time embedded application. If two guidelines or recommendations appear to contain opposing thoughts, they should be treated as constituting a tradeoff that the designer needs to consider.
At the completion of the application decomposition process, the schedulability of the newly formed tasks must be validated to ensure a robust system. Quantitative schedulability analysis of a real-time system determines whether the system as designed is schedulable. A real-time system is considered schedulable if every task in the system can meet its deadline.
This chapter also focuses on the schedulability analysis (Step 8). In particular, the chapter introduces a formal method known as Rate Monotonic Analysis (RMA).
In most cases, designers insist on a set of requirements before beginning work on a real-time embedded system. If the requirements are not fully defined, one of the first activities is to ensure that many of these requirements are solidified. Ambiguous areas also need to be fleshed out. The detailed requirements should be captured in a document, such as a Software Requirements Specification (SRS). Only then can an engineering team make a reasonable attempt at designing a system. A high-level example of a mobile phone design is provided to show how to decompose an application into concurrent units of execution.
Commonly, decomposing an application is performed using an outside-in approach. This approach follows a process of identifying the inputs and outputs of a system and expressing them in a simple high-level context diagram. A context diagram for the mobile application is illustrated in Figure 14.1. The circle in the center of the diagram represents the software application. Rectangular boxes represent the input and output devices for this application. In addition, arrows, labeled with meaningful names, represent the flow of the input and output communications. For the sake of simplicity, not all components (i.e., battery, input for hands-free ear plug, input for external power, and power on/off button) are illustrated.
Figure 14.1: High-level context diagram of a mobile handheld unit.
The diagram shows that the mobile handset application provides interfaces for the following I/O devices:
· antenna,
· speaker,
· volume control,
· keypad,
· microphone, and
· LCD.
The following inputs are identified:
· RF input,
· volume input,
· keypad input, and
· microphone input.
The following outputs are identified:
· RF output,
· speaker output, and
· LCD output.
After the inputs and outputs are identified, a first cut at decomposing the application can be made. Figure 14.2 shows an expanded diagram of the circle identifying some of the potential tasks into which the application can decompose. These tasks are along the edges of the newly drawn application, which means they probably must interact with the outside world. Note that these tasks are not the only ones required, but the process provides a good starting point. Upon further analysis, additional tasks may be identified, or existing tasks may be combined as more details are considered.
Figure 14.2: Using the outside-in approach to decompose an application into tasks.
Some inputs and outputs in a handheld mobile device can require more than one dedicated task to handle processing. Conversely, in some cases, a single task can handle multiple devices. Looking at the example, the antenna can have two tasks assigned to it: one for handling the incoming voice channel and one for handling the outgoing voice channel. Printing to the LCD can be a relatively simple activity and can be handled with one task. Similarly, sampling the input voice from the microphone can also be handled with one task for now but might require another task if heavy computation is required for sampling accuracy. Note that one task can handle the input keys and the volume control. Finally, a task is designated for sending the output to the speaker.
This example illustrates why the decomposition method is called outside-in: an engineering team can continue this way to decompose the overall application into tasks from the outside in.
The outside-in approach to decomposing an application is an example of one practical way to identify types of concurrent tasks that are dependent on or interact with I/O devices. The mobile handset example expands a high-level context diagram to determine some of the obvious tasks required to handle certain events or actions. Further refinement of this diagram would yield additional tasks. More formalized ways of identifying concurrency exist, however. Many guidelines are provided in this section to help the reader identify concurrency in an application. First, let's introduce a couple of concepts that are important to understanding concurrency.
It is important to encapsulate concurrency within an application into manageable units. A unit of concurrency can be a task or a process; it can be any schedulable thread of execution that can compete for the CPU's processing time. Although ISRs are not scheduled to run concurrently with other routines, they should also be considered in designing for concurrency because they follow a preemptive policy and are units of execution competing for CPU processing time. The primary objective of this decomposition process is to optimize parallel execution to maximize a real-time application's performance and responsiveness. If done correctly, the result can be a system that meets all of its deadlines robustly and responsively. If done incorrectly, real-time deadlines can be compromised, and the system's design may not be acceptable.
Concurrent tasks in a real-time application can be scheduled to run on a single processor or multiple processors. Single-processor systems can achieve pseudo concurrent execution, in which an application is decomposed into multiple tasks maximizing the use of a single CPU. It is important to note that on a single-CPU system, only one program counter (also called an instruction pointer) is used, and, hence, only one instruction can be executed at any time. Most applications in this environment use an underlying scheduler's multitasking capabilities to interleave the execution of multiple tasks; therefore, the term pseudo concurrent execution is used.
In contrast, true concurrent execution can be achieved when multiple CPUs are used in the designs of real-time embedded systems. For example, if two CPUs are used in a system, two concurrent tasks can execute in parallel at one time, as shown in Figure 14.3. This parallelism is possible because two program counters (one for each CPU) are used, which allows for two different instructions to execute simultaneously.
Figure 14.3: Pseudo and true concurrent (parallel) execution.
In the case of multiple CPU systems, the underlying RTOS typically is distributed, which means that various components, or copies of RTOS components, can execute on different CPUs. On such systems, multiple tasks can be assigned to run on each CPU, just as they do on single-CPU systems. In this case, even though two or more CPUs allow true concurrent execution, each CPU might actually be executing in a pseudo-concurrent fashion.
Unless explicitly stated, this book refers to both pseudo and true parallel execution as concurrent execution for the sake of simplicity.
Following the outside-in approach, certain types of tasks can be identified near the application edge (i.e., where an application needs to create an interface with an I/O device), whereas other tasks can be internal to the application. From the mobile handheld example, if a design team were to further decompose the application, these internal tasks would be identified. Applications such as calculator or calendar programs are examples of internal tasks or groupings of tasks that can exist within the overall handheld mobile application. These internal tasks are decoupled from the I/O devices; they need no device-specific information in order to run.
Guideline 1: Identify Device Dependencies
· Guideline 1a: Identify Active I/O Devices
· Guideline 1b: Identify Passive I/O Devices
Guideline 2: Identify Event Dependencies
Guideline 3: Identify Time Dependencies
· Guideline 3a: Identify Critical and Urgent Activities
· Guideline 3b: Identify Different Periodic Execution Rates
· Guideline 3c: Identify Temporal Cohesion
Guideline 4: Identify Computationally Bound Activities
Guideline 5: Identify Functional Cohesion
Guideline 6: Identify Tasks that Serve Specific Purposes
Guideline 7: Identify Sequential Cohesion
All real-time systems interface with the physical world through some devices, such as sensors, actuators, keyboards, or displays. An application can have a number of I/O devices interfacing to it. Not all devices, however, act as both input and output devices. Some devices can act just as inputs or just as outputs. Other devices can act as both. The discussions in this book refer to all of these devices as I/O devices.
The outside-in approach focuses on looking at the I/O devices in a system and assigning a task to each device. The basic concept is that unsynchronized devices need separate handling. For simple device interactions, processing within an ISR may suffice; however, for additional device processing, a separate task or set of tasks may be assigned. Both active and passive I/O devices should be considered for identifying potential areas of an application that can be decomposed into concurrent tasks.
As shown in Figure 14.4, hardware I/O devices can be categorized as two types:
· Active I/O devices
· Passive I/O devices
Figure 14.4: Some general properties of active and passive devices.
Active I/O devices generate interrupts to communicate with an application. These devices can generate interrupts in a periodic fashion or in sync with other active devices; these devices are referred to in this book as synchronous. Active devices can also generate interrupts aperiodically, or asynchronously, with respect to other devices; these devices are referred to in this book as asynchronous.
Passive I/O devices do not generate interrupts. Therefore, the application must initiate communications with a passive I/O device. Applications can communicate with passive devices in a periodic or aperiodic fashion.
Active devices generate interrupts whether they are sending input to or receiving output from the CPU. Active input devices send an interrupt to the CPU when the device has new input ready to be processed. The new input can be a large buffer of data, a small unit of data, or even no data at all. An example of the latter is a sensor that generates an interrupt every time it detects some event. On the other hand, an active output device sends an interrupt to the CPU when the device has finished delivering the previous output from the CPU to the physical world. This interrupt announces to the CPU and the application that the output device has completed the last request and is ready to handle the next request.
Passive input or output devices require the application to generate the necessary requests in order to interact with them. Passive input devices produce input only when the application requests it. The application can make these requests either periodically or aperiodically. In the former case, the application runs in a periodic loop and makes a request every time through the loop, a technique called polling the device. For aperiodic requests, the application makes a request only when it needs the data, based on an event asynchronous to the application itself, such as an interrupt from another device or a message from another executing task.
Special care must be taken when polling a passive input device, especially when sampling a signal that has sharp valleys or peaks. If the polling frequency is too low, a chance exists that a valley or peak might be missed; a common rule of thumb is to poll at no less than twice the highest frequency component of interest in the signal. If the polling frequency is too high, extra performance overhead might be incurred that uses unnecessary CPU cycles.
Active input or output I/O devices use interrupts to communicate with real-time applications. Every time an active input device needs to send data or notification of an event to a real-time application, the device generates an interrupt. The interrupt triggers an ISR that executes the minimum code needed to handle the input. If a lot of processing is required, the ISR usually hands off the process to an associated task through an inter-task communication mechanism.
Similarly, active output devices also generate interrupts when they need to communicate with applications. However, interrupts from active output devices are generated when they are ready to receive the next piece of data or notification of some event from the application. The interrupts trigger the appropriate ISR that hands off the required processing to an associated task using an inter-task communication mechanism.
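This ISR-to-task handoff pattern can be made concrete with a short sketch. FreeRTOS-style queue calls are used here purely for illustration, and the device access functions (read_uart_data_register, process_sample) are hypothetical placeholders; any RTOS with a message queue supports the same structure.

```c
#include <stdint.h>
#include "FreeRTOS.h"   /* FreeRTOS used for illustration only */
#include "queue.h"

extern uint8_t read_uart_data_register(void);  /* hypothetical device access */
extern void process_sample(uint8_t sample);    /* hypothetical heavy processing */

static QueueHandle_t rx_queue;  /* created at startup, e.g., xQueueCreate(32, sizeof(uint8_t)) */

/* ISR: execute the minimum code needed, then hand off to the task. */
void uart_rx_isr(void)
{
    BaseType_t woken = pdFALSE;
    uint8_t sample = read_uart_data_register();
    xQueueSendFromISR(rx_queue, &sample, &woken);
    portYIELD_FROM_ISR(woken);   /* switch to the task immediately if it unblocked */
}

/* Asynchronous active device I/O task: blocks until the ISR posts data. */
void uart_rx_task(void *params)
{
    uint8_t sample;
    for (;;) {
        if (xQueueReceive(rx_queue, &sample, portMAX_DELAY) == pdTRUE)
            process_sample(sample);   /* heavier processing runs at task level */
    }
}
```

The ISR stays short and deterministic, while the task absorbs the variable-length processing at a priority the designer controls.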
Whether an active I/O device acts as an input or an output to the application, and whether it generates interrupts synchronously or asynchronously, the general communication mechanism is similar to the one illustrated in Figure 14.5.
Figure 14.5: General communication mechanisms for active I/O devices.
Some typical tasks that can result from identifying an active I/O device in a real-time application are listed in Table 14.1.
Table 14.1: Common tasks that interface with active I/O devices.
Task Type | Description |
---|---|
Asynchronous Active Device I/O Task | Assigned to active I/O devices that generate aperiodic interrupts or whose operation is asynchronous with respect to other I/O devices. |
Synchronous Active Device I/O Task | Assigned to active I/O devices that generate periodic interrupts or whose operation is synchronous with respect to other I/O devices. |
Resource Control Device I/O Task | Assigned for controlling the access to a shared I/O device or a group of devices. |
Event Dispatch Device I/O Task | Assigned for dispatching events to other tasks from one or more I/O devices. |
Recommendation 1: Assign separate tasks for separate active asynchronous I/O devices. Active I/O devices that interact with real-time applications do so at their own rate. Each hardware device that uses interrupts to communicate with an application and whose operation is asynchronous with respect to other I/O devices should be considered for its own separate task.
Recommendation 2: Combine tasks for I/O devices that generate infrequent interrupts having long deadlines. In the initial design, each active I/O device can have a separate task assigned to handle processing. Sometimes, however, combining the processing of two I/O devices into a single task makes sense. For example, if two I/O devices generate aperiodic or asynchronous interrupts infrequently and have relatively long deadlines, a single task might suffice.
Recommendation 3: Assign separate tasks to devices that have different input and output rates. Generally speaking, a task that handles a device with a high I/O frequency should have a higher task priority than a task that handles a device with a lower frequency. Higher I/O frequency implies shorter allowable processing time. However, the importance of the I/O operation, and the consequences of delayed I/O, should be taken into account when assigning task priorities with respect to I/O frequency.
Recommendation 4: Assign higher priorities to tasks associated with interrupt-generating devices. A task that needs to interface with a particular I/O device must be set to a high-enough priority level so that the task can keep up with the device. This requirement exists because the task's execution speed is usually constrained by the speed of the interrupts that an associated I/O device generates and not necessarily the processor on which the application is running.
For I/O devices that generate periodic interrupts, the interrupt period dictates how much time a task has to complete its processing. If the period is very short, tasks associated with these devices need to be set at high priorities.
For I/O devices that generate aperiodic interrupts, it can be difficult to predict how long an associated task will have to process the request before the next interrupt comes in. In some cases, interrupts can occur rapidly. In other cases, however, the interrupts can occur with longer time intervals between them. A rule of thumb is that these types of tasks need their priorities set high to ensure that all interrupt requests can be handled, including ones that occur within short time intervals. If an associated task's priority is set too low, the task might not be able to execute fast enough to meet the hardware device's needs.
Recommendation 5: Assign a resource control task for controlling access to I/O devices. Sometimes multiple tasks need to access a single hardware I/O device. In this case, the device can only serve one task at a time; otherwise, data may be lost or corrupted. An efficient approach is to assign a resource control task to that device (also known as a resource monitor task). This task can be used to receive multiple I/O requests from different tasks, so that the resource control task can send the I/O requests in a controlled and sequential manner to the I/O device.
This resource control task is not limited to working with just one I/O device. In some cases, one resource task can handle multiple requests that might need to be dispatched to one or more I/O devices.
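The structure of a resource control task can be sketched briefly. FreeRTOS-style queues are again used for illustration; the io_request_t layout and the device_read/device_write driver calls are assumptions, not a prescribed interface.

```c
#include <stddef.h>
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

extern int device_read(uint8_t *buf, size_t len);    /* hypothetical driver calls */
extern int device_write(const uint8_t *buf, size_t len);

typedef struct {
    enum { IO_READ, IO_WRITE } op;
    uint8_t      *buf;
    size_t        len;
    QueueHandle_t reply;     /* where the completion status is posted; may be NULL */
} io_request_t;

static QueueHandle_t io_queue;   /* all client tasks funnel requests through here */

/* Resource control task: the only task that touches the shared device, so
 * requests reach the hardware one at a time, in arrival order. */
void resource_control_task(void *params)
{
    io_request_t req;
    for (;;) {
        xQueueReceive(io_queue, &req, portMAX_DELAY);
        int status = (req.op == IO_READ) ? device_read(req.buf, req.len)
                                         : device_write(req.buf, req.len);
        if (req.reply != NULL)
            xQueueSend(req.reply, &status, 0);   /* notify the requesting task */
    }
}
```

Because client tasks only ever enqueue requests, no additional locking around the device is needed; serialization falls out of the single consumer.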
Recommendation 6: Assign an event dispatch task for I/O device requests that need to be handed off to multiple tasks. Events or requests that come from an I/O device can be propagated across multiple tasks. A single task assigned as an event dispatch task can receive all requests from I/O devices and can dispatch them to the appropriate tasks accordingly.
Passive devices are different from active devices because passive devices do not generate interrupts. They sit passively until an application's task requests them to do something meaningful. Whether the request is for an input or an output, an application's task needs to initiate the event or data transfer sequence. Tasks communicate with these devices either by polling them periodically or by making a request whenever input or output is needed.
Whether a passive I/O device acts as an input or an output to the application, and whether the application communicates with it periodically or aperiodically, the general communication mechanism is similar to the one illustrated in Figure 14.6.
Figure 14.6: General communication mechanisms for passive I/O devices.
Some typical tasks that can result from identifying a passive I/O device in a real-time application are listed in Table 14.2.
Table 14.2: Common tasks that interface with passive I/O devices.
Task Type | Description |
---|---|
Aperiodic Passive Device I/O Task | Assigned to passive I/O devices and issues requests to those devices on an as-needed basis. |
Periodic Passive Device I/O Task | Assigned to passive I/O devices and polls those devices in a periodic fashion. |
Resource Control Device I/O Task | Assigned for controlling the access to a shared hardware I/O device or a group of devices. |
Event Dispatch Device I/O Task | Assigned for dispatching events to other tasks from one or more I/O devices. |
Recommendation 1: Assign a single task to interface with passive I/O devices when communication with such devices is aperiodic and when deadlines are not urgent. Some applications need to communicate with a passive I/O device aperiodically. This device might be a sensor or display. If the deadlines are relatively long, these requests for one or more passive I/O devices can be handled with one task.
Recommendation 2: Assign separate polling tasks to send periodic requests to passive I/O devices. Commonly, a real-time application might need to sample a signal or some data repeatedly from a passive I/O device. This process can be done effectively in a periodic polling loop. In order to avoid over-sampling or under-sampling the data, assign a separate task to each passive I/O device that needs to be polled at different rates.
Recommendation 3: Trigger polling requests via timer events. More than one way exists to implement timing-based polling loops. One common mistake is using a time delay within the loop that is equal to the period of the sampling rate. This method can be problematic because the loop does not take exactly the same amount of time to execute on each pass; the loop is subject to interrupts and preemption by higher-priority tasks. A better approach is to use a timer to trigger an event after every cycle. A more accurate periodic rate can be maintained using this approach.
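One way to realize this recommendation, sketched here with FreeRTOS calls for illustration: vTaskDelayUntil() wakes the task at absolute tick intervals, so the loop body's variable execution time does not accumulate as drift. The 10ms period and the adc_read()/record_sample() functions are assumptions. An RTOS software timer posting a semaphore each period achieves the same effect.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

extern uint16_t adc_read(void);           /* hypothetical passive device read */
extern void record_sample(uint16_t s);    /* hypothetical application work */

/* Periodic passive device I/O task: polls at a fixed absolute rate. */
void adc_poll_task(void *params)
{
    const TickType_t period = pdMS_TO_TICKS(10);   /* assumed 100 Hz sampling */
    TickType_t last_wake = xTaskGetTickCount();
    for (;;) {
        vTaskDelayUntil(&last_wake, period);  /* wake one period after the last
                                                 wake time, not after this point */
        record_sample(adc_read());
    }
}
```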
Recommendation 4: Assign a high relative priority to polling tasks with relatively short periods. Tasks that are set up to poll passive I/O devices for inputs may do so at different rates. If the period is very short, less time is available to process incoming data before the next cycle. In this case, these tasks with faster polling loops need to be set with higher priorities. Designers, however, need to remember that this process must be done carefully, as heavy polling can use extra CPU cycles and result in increased overhead.
Events in a real-time application can propagate across multiple tasks. Whether an event is generated externally from an I/O device or internally from within the application, a need exists for creating a task or a group of tasks that can properly handle the event as it is propagated through the application. Externally generated events are discussed in the previous sections, so the focus here is on internally generated events. Examples of events that can be generated internally to an application include error conditions arising or faults being detected. In these cases, an event is generated and either propagated outward to an I/O device or handled internally through a corrective action.
Before designing a real-time application, take time to understand and itemize each of the timing deadlines required for the application. After the timing deadlines have been identified, separate tasks can be assigned to handle the separate deadlines. Task priorities can be assigned based on the criticality or urgency of each deadline.
Note the difference between criticality and urgency. Critical tasks are tasks whose failure would be disastrous. The deadline might be long or short but must always be met, or else the system does not fulfill the specifications. An urgent task is a task whose timing deadline is relatively short. Meeting this deadline might or might not be critical. Both urgent and critical tasks are usually set to higher relative priorities.
Each rate-driven activity runs independently of activities at other rates. Periodic activities can be identified, and activities with similar rates can be grouped into tasks.
Real-time systems may contain sequences of code that always execute at the same time, although they are functionally unrelated. Such sequences exhibit temporal cohesion. Examples are different activities driven by the same external stimulus (i.e., a timer). Grouping such sequences into one task reduces system overhead.
Some activities in a real-time application require a lot of CPU time compared to the time required for other operations, such as performing I/O. These activities, known as computationally bound activities, can be number-crunching activities and typically have relatively long deadlines. These types of activities are usually set to lower relative priorities so that they do not monopolize the CPU. In some cases, these types of tasks can be time-sliced at a common priority level, where each gets time to execute when tasks that are more critical don't need to run.
Functional cohesion requires collecting groups of functions or sequences of code that perform closely related activities into a single task. In addition, if two tasks are closely coupled (pass lots of data between each other), they should also be considered for combination into one task. Grouping these closely related or closely coupled activities into a single task can help eliminate synchronization and communication overhead.
Tasks can also be grouped according to the specific purposes they serve. One example of a task serving a clear purpose is a safety task. Detection of possible problems, setting alarms, and sending notifications to the user, as well as setting up and executing corrective measures, are just some examples that can be coordinated in a safety task or group of tasks. Other tasks can also exist in a real-time system that can serve a specific purpose.
Sequential cohesion groups activities that must occur in a given sequence into one task to further emphasize the requirement for sequential operation. A typical example is a sequence of computations that must be carried out in a predefined order. For example, the result of the first computation provides input to the next computation and so on.
After an embedded application has been decomposed into ISRs and tasks, the tasks must be scheduled to run in order to perform required system functionality. Schedulability analysis determines if all tasks can be scheduled to run and meet their deadlines based on the deployed scheduling algorithm while still achieving optimal processor utilization.
Note that schedulability analysis looks only at how systems meet temporal requirements, not functional requirements.
The commonly practiced analytical method for real-time systems is Rate Monotonic Analysis (RMA). Liu and Layland initially developed the mathematical model for RMA in 1973. (This book calls their RMA model the basic RMA because it has since been extended by later researchers.) The model is built on a scheduling mechanism called Rate Monotonic Scheduling (RMS), a preemptive scheduling algorithm that uses rate monotonic priority assignment as its task priority assignment policy. Rate monotonic priority assignment assigns each task a priority that is a monotonic function of that task's execution rate. In other words, the shorter the period between executions, the higher the priority assigned to a task.
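To make the policy concrete, here is a minimal sketch in C of rate monotonic priority assignment. The task_desc_t structure and the larger-number-means-higher-priority convention are assumptions for illustration; some RTOSes use the opposite convention.

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    unsigned    period_ms;   /* task period: shorter period = higher rate */
    unsigned    priority;    /* assigned below; larger number = higher priority */
} task_desc_t;

static int by_period(const void *a, const void *b)
{
    const task_desc_t *x = a, *y = b;
    return (x->period_ms > y->period_ms) - (x->period_ms < y->period_ms);
}

/* Rate monotonic priority assignment: sort by period, then hand out
 * priorities so the shortest-period task receives the highest one. */
void assign_rm_priorities(task_desc_t tasks[], size_t n, unsigned highest)
{
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (size_t i = 0; i < n; i++)
        tasks[i].priority = highest - (unsigned)i;
}
```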
A set of assumptions is associated with the basic RMA. These assumptions are that:
· all of the tasks are periodic,
· the tasks are independent of each other, and no interactions occur among tasks,
· a task's deadline is the beginning of its next period,
· each task has a constant execution time that does not vary over time,
· all of the tasks have the same level of criticality, and
· aperiodic tasks are limited to initialization and failure recovery work, and these aperiodic tasks do not have hard deadlines.
Equation 14.1 is used to perform the basic RMA schedulability test on a system:

$$\sum_{i=1}^{n} \frac{C_i}{T_i} \le U(n) = n\left(2^{1/n} - 1\right) \qquad (14.1)$$

where:

C_i = worst-case execution time associated with periodic task i

T_i = period associated with task i

n = number of tasks

The left side of the equation is the total processor utilization of the task set; the right side, U(n), is the theoretical processor utilization bound. If the processor utilization for a given set of tasks does not exceed the theoretical utilization bound, this set of tasks is schedulable. The value of U(n) decreases as n increases and eventually converges to approximately 69% (ln 2) as n becomes infinite.
Let's look at a sample problem and see how the formula is implemented. Table 14.3 summarizes the properties of three tasks that are scheduled using the RMS.
Table 14.3: Properties of tasks.
Periodic Task | Execution Time (milliseconds) | Period (milliseconds) |
---|---|---|
Task 1 | 20 | 100 |
Task 2 | 30 | 150 |
Task 3 | 50 | 300 |
Using Equation 14.1, the processor utilization for this sample problem is calculated as follows:

$$U = \frac{20}{100} + \frac{30}{150} + \frac{50}{300} = 0.20 + 0.20 + 0.17 = 0.57$$

Total utilization for the sample problem is 57%, which is below the theoretical bound for three tasks, U(3) = 3(2^{1/3} - 1) ≈ 78%. This system of three tasks is schedulable, i.e., every task can meet its deadline.
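This test is straightforward to mechanize. Below is a minimal sketch in C, assuming the tasks' worst-case execution times and periods are available in arrays; the function name and array-based interface are illustrative, not from the original text.

```c
#include <math.h>
#include <stdbool.h>
#include <stddef.h>

/* Basic RMA schedulability test (Equation 14.1): the total utilization
 * sum(Ci/Ti) must not exceed the theoretical bound n(2^(1/n) - 1). */
bool rma_basic_schedulable(const double c[], const double t[], size_t n)
{
    double u = 0.0;
    for (size_t i = 0; i < n; i++)
        u += c[i] / t[i];                                 /* utilization of task i */

    double bound = (double)n * (pow(2.0, 1.0 / (double)n) - 1.0);
    return u <= bound;
}
```

Called with the Table 14.3 values, c = {20, 30, 50} and t = {100, 150, 300}, the function computes U ≈ 0.567 against a bound of ≈ 0.780 and reports the task set schedulable.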
The basic RMA is limited in practice. The second assumption associated with the basic RMA is impractical because tasks in real-time systems have interdependencies, and task synchronization methods are part of many real-time designs. Task synchronization, however, lies outside the scope of the basic RMA.
Deploying inter-task synchronization methods implies that some tasks in the system will experience blocking, which is the suspension of task execution because of resource contention. Therefore, the basic RMA is extended to account for task synchronization. Equation 14.2 provides the extended RMA schedulability test, which must hold for each task i, with tasks indexed in decreasing order of priority (increasing period):

$$\sum_{k=1}^{i} \frac{C_k}{T_k} + \frac{B_i}{T_i} \le i\left(2^{1/i} - 1\right), \quad i = 1, \ldots, n \qquad (14.2)$$

where:

C_i = worst-case execution time associated with periodic task i

T_i = period associated with task i

B_i = the longest duration of blocking that can be experienced by task i

n = number of tasks
This equation is best demonstrated with an example. This example uses the same three tasks provided in Table 14.3 and inserts two shared resources, as shown in Figure 14.7. In this case, the two resources represent a shared memory (resource #1) and an I/O bus (resource #2).
Figure 14.7: Example setup for extended RMA.
Task #1 makes use of resource #2 for 15ms at a rate of once every 100ms. Task #2 is a little more complex. It is the only task that uses both resources. Resource #1 is used for 5ms, and resource #2 is used for 10ms. Task #2 must run at a rate of once every 150ms.
Task #3 has the lowest frequency of the tasks and runs once every 300ms. Task #3 also uses resource #2 for 18ms.
Now looking at schedulability, Equation 14.2 yields three separate equations, one per task, each of which must be verified against its utilization bound. Let's take a closer look at the first equation:

$$\frac{C_1}{T_1} + \frac{B_1}{T_1} = \frac{20}{100} + \frac{18}{100} = 0.38 \le 1\left(2^{1/1} - 1\right) = 1.0$$

Either task #2 or task #3 can block task #1 by using resource #2. The blocking factor B_1 is the greater of the times task #2 or task #3 holds the resource, which is the 18ms from task #3. Applying the numbers to Equation 14.2, the result is below the utilization bound of 100% for a single task. Hence, task #1 is schedulable.
Looking at the second equation, task #2 can be blocked by task #3. The blocking factor B_2 is 18ms, which is the time task #3 has control of resource #2, as shown:

$$\frac{C_1}{T_1} + \frac{C_2}{T_2} + \frac{B_2}{T_2} = \frac{20}{100} + \frac{30}{150} + \frac{18}{150} = 0.52 \le 2\left(2^{1/2} - 1\right) \approx 0.828$$
Task #2 is also schedulable because the result is below the utilization bound for two tasks. Now looking at the last equation, note that B_n is always equal to 0: the blocking factor for the lowest-priority task is always 0, as no other task can block it (they all preempt it if they need to run), as shown:

$$\frac{C_1}{T_1} + \frac{C_2}{T_2} + \frac{C_3}{T_3} + \frac{B_3}{T_3} = \frac{20}{100} + \frac{30}{150} + \frac{50}{300} + 0 \approx 0.57 \le 3\left(2^{1/3} - 1\right) \approx 0.78$$

Again, the result is below the utilization bound for three tasks, and, therefore, all tasks are schedulable.
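The three per-task checks can be automated in the same way. The sketch below complements the earlier rma_basic_schedulable() example (same includes assumed); it expects the tasks ordered by decreasing priority, i.e., increasing period.

```c
/* Extended RMA schedulability test (Equation 14.2): for each task i, the
 * utilization of tasks 1..i plus the blocking term Bi/Ti must fit within
 * the bound i(2^(1/i) - 1). Tasks must be ordered by increasing period. */
bool rma_extended_schedulable(const double c[], const double t[],
                              const double b[], size_t n)
{
    double u = 0.0;
    for (size_t i = 0; i < n; i++) {
        u += c[i] / t[i];                  /* running utilization of tasks 1..i */
        double bound = (double)(i + 1)
                     * (pow(2.0, 1.0 / (double)(i + 1)) - 1.0);
        if (u + b[i] / t[i] > bound)
            return false;                  /* this task misses its bound */
    }
    return true;
}
```

For the example above, c = {20, 30, 50}, t = {100, 150, 300}, and b = {18, 18, 0} reproduce the three checks and the function returns true.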
Other extensions are made to basic RMA for dealing with the rest of the assumptions associated with basic RMA, such as accounting for aperiodic tasks in real-time systems. Consult the listed references for additional readings on RMA and related materials.
Some points to remember include the following:
· An outside-in approach can be used to decompose applications at the top level.
· Device dependencies can be used to decompose applications.
· Event dependencies can be used to decompose applications.
· Timing dependencies can be used to decompose applications.
· The level of criticality of the workload involved can be used to decompose applications.
· Functional cohesion, temporal cohesion, or sequential cohesion can be used either to form a task or to combine tasks.
· Rate Monotonic Scheduling can be summarized by stating that a task's priority depends on its period-the shorter the period, the higher the priority. RMS, when implemented appropriately, produces stable and predictable performance.
· Schedulability analysis only looks at how systems meet temporal requirements, not functional requirements.
· Six assumptions are associated with the basic RMA:
○ all of the tasks are periodic,
○ the tasks are independent of each other, and no interactions occur among tasks,
○ a task's deadline is the beginning of its next period,
○ each task has a constant execution time that does not vary over time,
○ all of the tasks have the same level of criticality, and
○ aperiodic tasks are limited to initialization and failure recovery work, and these aperiodic tasks do not have hard deadlines.
· Basic RMA does not account for task synchronization and aperiodic tasks.