Inside TOPPERS/ASP

Chapter 02. TOPPERS/ASP kernel specification

Having reviewed the ITRON specification and the TOPPERS/ASP kernel in the last chapter, in this chapter we look at the ASP kernel specification more closely, covering both the parts inherited from the JSP kernel -- task scheduling rules, processing units, delayed dispatch, the configuration procedure, and so on -- and the newly added parts -- the interrupt priority mask, the priority-based data queue, and the interrupt processing model and interrupt management. (The interrupt processing model will be explained later on.)
First of all we explain some basic terms and then take a bird's-eye view of task states. You can skip this part if you already know them well.
Glossary

a. Objects
The resources that the kernel operates on are referred to as objects; they include tasks, semaphores, eventflags, and so on.
Each object is assigned a unique ID within its kind by the configurator (introduced later). The kernel manages objects through these IDs, and an ID is generally specified as a parameter of a service call. For example, when we want to create a task, we call the corresponding service call with the ID of the target task as its parameter.

b. Processing Units
This refers to an object that holds a corresponding program, or to that program itself.
In the case of a task or an interrupt handler, the corresponding function is prepared by the application side and called by the kernel.

c. Tasks
The term "task" refers to a unit of concurrent processing, controlled by the kernel. While program statements inside a single task are executed sequentially, statements of different tasks are executed concurrently. Multiple tasks are executed concurrently when seen from an application's point of view. However, the tasks do not actually run in parallel but rather, they are executed one by one under the control of the kernel, using time-sharing techniques.

[Invoking Task]
The task that invokes a service call is called the "invoking task". Seen from the kernel side, it is the currently executing task.

d. Dispatching (Task Dispatching)
The act of switching the task executing on the processor to another, non-executing task is called "dispatching" (or "task dispatching"). The mechanism in the kernel that performs dispatching is called the "dispatcher" (or the "task dispatcher").

e. Scheduling (Task Scheduling)
The process that determines which task is to be executed next is called "scheduling" (or "task scheduling"). The mechanism in the kernel that performs scheduling is called the "scheduler" (or the "task scheduler").

Note: Relationship between Scheduler and Dispatcher
The scheduler decides which program is to be executed next according to predetermined precedence. When a program with higher precedence than the currently running one appears, it instructs the dispatcher to perform the switch.

f. Contexts
The environment in which a program executes is generally called the program's "context". In practice, it often refers to the contents of the associated CPU registers.

g. Precedence
The criterion used to determine the order of program execution is called "precedence".
In principle, when a higher precedence program becomes executable, it will begin executing in place of the currently executing lower precedence program.

h. Priorities
Priorities are parameters determined by applications to control the processing order of tasks, messages, and so on. In the ASP kernel, integers from 1 to 16 are used to represent priorities. A smaller number indicates a higher priority, following the ITRON specification.

[Priorities and precedence]
A "priority" is a parameter given by an application, while "precedence" is used to clarify the order of program execution.
For example, in a situation where there are several tasks with the same priority, it is "precedence" that determines their execution order.

i. Interrupts (External Interrupts)
This refers to exception processing invoked by an external event, independent of the instruction currently being executed by the processor.

j. Interrupt Mask (Disabling)
The act of blocking the route that transfers interrupt requests from peripheral devices to the processor is called "interrupt masking" (or "disabling interrupts").

k. CPU Exceptions
This refers to exception processing that depends on the instruction currently being executed by the processor.

Task States

Task states are shown in Figure 2-1.


Figure 2-1 task state transition diagram

(a) RUNNING state
The state in which the task is currently being executed by the CPU.

(b) READY state
When a task is in the READY state, it is ready to execute but it cannot, because a task with higher precedence is already executing.

Here is one note on dispatching and preemption.

Figure 2-2 dispatching and preemption

When a task in READY has higher precedence than the currently executing one, "dispatching" occurs and the former task shifts into RUNNING.
From the point of view of the previously executing task, it has been "preempted" by the newly executing one.

(c) BLOCKED state
This is a state in which the requirements for execution have not yet been met; the task is waiting for a certain trigger before it can execute. Necessary information, such as the contents of the CPU registers, is preserved until the task returns to RUNNING.
The BLOCKED state is divided into 3 sub-states:
(c1) WAITING state
The task has paused its own execution until the desired requirements are met.
(c2)SUSPENDED state
The task has been forcibly suspended by another task.
(c3)WAITING-SUSPENDED state
A state in which the above two overlap.

(d) DORMANT state
The state of a task that has not yet been started, or that has already ended. Once a task enters DORMANT, all of its execution information is lost. When a task leaves DORMANT, execution starts from the task entry address.

(e) UN-REGISTERED state
A virtual state used to represent tasks that have not been created, or that have not been registered with the system.

Task Scheduling Rules

a. Preemptive Priority-based Task Scheduling

If a higher-priority task becomes READY, a task switch is performed even while a lower-priority task is still executing.

b.FCFS (First Come First Served) Scheduling

If there are several tasks with the same priority, the one that became READY first executes before the others.

- Aside from static priority assignment, chg_pri, a service call used to change a task's priority dynamically, is also provided (see the sketch below).
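
For illustration, here is a hedged sketch of dynamic priority change with chg_pri. The header names, the task ID macro TASK_B, and the task function are assumptions for this example, and the exact data types may differ between the μITRON 4.0 notation and the ASP kernel headers.

#include <kernel.h>          /* kernel service call declarations (assumed header name) */
#include "kernel_cfg.h"      /* object ID macros generated by the configurator (assumed) */

void task_a(intptr_t exinf)
{
    /* Temporarily raise task B to priority 2 so that it takes precedence
       over the other tasks sharing its original priority. */
    chg_pri(TASK_B, 2);

    /* ... cooperate with task B while it runs at the higher priority ... */

    /* Restore task B to the priority it was given at creation time. */
    chg_pri(TASK_B, TPRI_INI);
}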

We will use the following two cases for further explanation.
1) a situation where a task is preempted
2) a situation where a task enters WAITING

1. A situation where a task is preempted

Tasks are activated in the following order: task A with priority 1, task E with priority 3, and then tasks B, C, and D with priority 2. The resulting precedence relationship is illustrated in Figure 2-3.
For convenience, the RUNNING task and the READY tasks are highlighted in color.


Figure 2-3 initial precedence relationship


Task A is terminated.


Next, task B, which now has the highest precedence, shifts to RUNNING. (Figure 2-4)


Figure 2-4 precedence relationship after task A's termination


Task A is started again.

At this point, task B is preempted and goes back to READY; this is preemptive priority-based task scheduling at work. Meanwhile, since task B reached RUNNING earlier than either C or D, the tasks with priority 2 keep their precedence relationship unchanged; this follows FCFS (First Come First Served) scheduling. (Figure 2-5)


Figure 2-5 precedence relationship after start of task A

2. A situation where a task enters WAITING


Figure 2-6 initial precedence relationship

Task B goes into WAITING.

Since task precedence is a concept that applies only among runnable (RUNNING or READY) tasks, the precedence relationship at this point is as shown in Figure 2-7:

Figure 2-7 precedence relationship after task B shifts to WAITING


Task B is released from WAITING.

Since task B became READY after C and D, it now has the lowest precedence among the tasks with priority 2; this again follows FCFS scheduling. (Figure 2-8)


Figure 2-8 precedence relationship after task B's release from WAITING

Processing Units

A processing unit refers to a kernel-controlled object that holds a corresponding program.

There are the following kinds of processing units:
- Task
- Interrupt Handler
  - Interrupt Service Routine (ISR)
  - Time Event Handler (Cycle Handler, Alarm Handler)
- CPU Exception Handler
- Initialization Routine, Finalization Routine

The precedence among processing units and the dispatcher is listed below, from high to low:
- Interrupt Handler
- Dispatcher
- Task
(Here CPU Exception Handler is omitted.)

Since an interrupt handler has higher precedence than the dispatcher, a task switch cannot happen during execution of an interrupt handler; this is known as "delayed dispatch".
We use Figure 2-9 to explain this.


Figure 2-9 delayed dispatch

Suppose interrupt handler A starts during execution of task A, and then interrupt handler B with higher priority starts on top of it (nested interrupts). Within interrupt handler B, a service call is issued that makes task B -- which has a higher priority than task A -- runnable. It would seem that the dispatcher should step in and switch to task B immediately. However, since an interrupt handler's precedence is higher than that of the dispatcher, activation of the dispatcher is delayed until all activated interrupt handlers have terminated. Once the dispatcher runs, task B enters RUNNING. This is "delayed dispatch".
4-1 Contexts
There are two types of contexts: task contexts and non-task contexts.
Contexts that can be regarded as a part of a task are generically called task contexts, while other contexts are generically called non-task contexts. Non-task contexts include contexts in which interrupt handlers and CPU exception handlers execute.
The differences between the two types lie in both implementation and specification.
- In terms of implementation, each task holds its own stack as part of its context, while all non-task contexts share a single stack.
- In terms of specification, it is possible to enter WAITING in a task context, but impossible in a non-task context (non-task contexts hold no such state). A processing unit in a non-task context, such as an interrupt handler, always runs to completion once started. Therefore, any service call that can generate a WAITING state must not be called in a non-task context. This means the set of callable service calls differs between the two types of contexts, as the sketch below illustrates.
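
As a hedged sketch of this restriction (the handler name, the semaphore ID SEM_DATA, and the header names are invented for illustration): a processing unit running in a non-task context may wake up tasks, but must never issue a call that can generate WAITING.

#include <kernel.h>
#include "kernel_cfg.h"      /* SEM_DATA is assumed to be defined in the generated header */

/* An interrupt handler runs in a non-task context and always runs to completion. */
void sample_inthdr(void)
{
    /* OK: releasing a semaphore never blocks (non-task-context form of sig_sem). */
    isig_sem(SEM_DATA);

    /* NOT allowed here: wai_sem(SEM_DATA) or dly_tsk(10) could generate
       a WAITING state, which a non-task context cannot enter. */
}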

4-2 System States

a. CPU-locked State
All interrupts, except those outside the control of the kernel, are disabled, and in addition, task dispatching does not occur.

b. Dispatch-disabled State
Task dispatching does not occur, while interrupts are still accepted.
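
A hedged sketch of how a task enters and leaves these two states; the service calls loc_cpu/unl_cpu and dis_dsp/ena_dsp come from the ITRON specification, and the surrounding task is invented for illustration.

#include <kernel.h>

void task_c(intptr_t exinf)
{
    /* CPU-locked state: interrupts under kernel control are disabled
       and no dispatching occurs until unl_cpu() is called. */
    loc_cpu();
    /* ... a very short critical section shared with interrupt handlers ... */
    unl_cpu();

    /* Dispatch-disabled state: interrupts are still accepted,
       but no task switch occurs until ena_dsp() is called. */
    dis_dsp();
    /* ... a critical section shared only with other tasks ... */
    ena_dsp();
}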

c. Interrupt Priority Mask
This is newly introduced in the ASP kernel and is based upon the TOPPERS Standard Interrupt Processing Model, to be described later on.
The kernel holds a variable that represents the interrupt priority mask; only interrupts with a priority higher than the value of that variable are accepted.
The state in which the interrupt priority mask is 0 is called the interrupt-priority-mask-all-cleared state; no interrupt is masked in this state. The state in which the interrupt priority mask is not 0 rejects at least one interrupt and is called the interrupt-priority-mask-not-all-cleared state. When a task sets a certain interrupt priority mask, it effectively raises its own precedence to the corresponding interrupt priority (Figure 2-10). In other words, because dispatching has lower precedence than any interrupt, no dispatch occurs while some interrupts are masked in this way.

Figure 2-10 states of the interrupt priority mask
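
A hedged sketch using chg_ipm and get_ipm (listed among the interrupt-management APIs later in this chapter); the priority value -2 and the constant TIPM_ENAALL for the all-cleared state are assumptions that depend on the target's interrupt priority range and the kernel headers.

#include <kernel.h>

void task_d(intptr_t exinf)
{
    PRI prev_ipm;

    get_ipm(&prev_ipm);    /* remember the current interrupt priority mask */

    /* Set the mask so that only interrupts with a priority higher than -2
       are accepted (interrupt priorities are negative in the ASP kernel;
       the value here is only an example). */
    chg_ipm(-2);

    /* ... a short section that must not be disturbed by lower-priority interrupts ... */

    chg_ipm(prev_ipm);     /* restore; chg_ipm(TIPM_ENAALL) would return to the all-cleared state */
}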

d. Dispatch Suspended State
A state in which dispatching does not occur, for at least one of the following four reasons:
- execution is in a non-task context
- the CPU-locked state is in effect
- the interrupt-priority-mask-not-all-cleared state is in effect
- the dispatch-disabled state is in effect
A service call's name basically takes the form xxx_yyy, where xxx indicates the operation and yyy the target object, as in act_tsk. Besides this, names of the form zxxx_yyy also exist, where z is a prefix letter. For example, service calls that can be issued from non-task contexts carry the prefix i, which distinguishes them from the corresponding calls issued from task contexts (such as iact_tsk).

(Examples)
rcv_dtq: receives from a data queue.
prcv_dtq: tries to receive from a data queue but does not enter WAITING even if the queue is empty (polling).
trcv_dtq: tries to receive from a data queue, entering WAITING for at most a specified time if the queue is empty.
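
A hedged sketch of these three receive variants; the data queue ID DTQ_CMD and the timeout value are assumptions, and the data type may appear as VP_INT in μITRON 4.0 notation or intptr_t in the ASP headers.

#include <kernel.h>
#include "kernel_cfg.h"      /* DTQ_CMD is assumed to be created in the configuration file */

void receiver_task(intptr_t exinf)
{
    intptr_t data;
    ER ercd;

    ercd = rcv_dtq(DTQ_CMD, &data);        /* waits, possibly forever, until data arrives      */

    ercd = prcv_dtq(DTQ_CMD, &data);       /* polling: returns an error immediately instead of */
                                           /* entering WAITING when the queue is empty         */

    ercd = trcv_dtq(DTQ_CMD, &data, 100);  /* waits for at most 100 time units (typically ms), */
                                           /* then returns a timeout error                     */
}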

Static APIs are written in the system configuration file to determine the creation information and initial states of objects. A static API can be regarded as an uppercase version of the corresponding service call: its name, functionality, and parameters match, which makes both types easy to memorize.

Example of a static API
CRE_TSK(ID tskid, {ATR tskatr, VP_INT exinf, FP task, PRI itskpri, SIZE stksz, VP stk});

The service call corresponding to CRE_TSK is:
ER ercd = cre_tsk(ID tskid, T_CTSK *pk_ctsk);

The two behave a little differently in how their parameters are specified: for the service call, the user passes a pointer to a structure holding all of the attributes, while for the static API the user specifies the attributes one by one, since no structure exists in memory at the time the static API is processed.
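
As a hedged illustration of this difference (the task ID TASK1, the entry function, the priority, and the stack size are arbitrary, and the T_CTSK field names follow the μITRON 4.0 notation used above):

#include <kernel.h>

extern void task1_main(VP_INT exinf);    /* the task's entry function */

/* In the configuration file, the attributes are written out one by one:  */
/*   CRE_TSK(TASK1, {TA_HLNG|TA_ACT, 0, task1_main, 5, 1024, NULL});      */

/* With the service call, the same attributes are packed into a structure: */
void create_task1(void)
{
    T_CTSK ctsk;
    ER ercd;

    ctsk.tskatr  = TA_HLNG | TA_ACT;   /* high-level-language routine, activated on creation */
    ctsk.exinf   = 0;                  /* extended information passed to the task            */
    ctsk.task    = (FP) task1_main;    /* task entry function                                */
    ctsk.itskpri = 5;                  /* initial priority                                   */
    ctsk.stksz   = 1024;               /* stack size in bytes                                */
    ctsk.stk     = NULL;               /* NULL: let the kernel allocate the stack            */

    ercd = cre_tsk(TASK1, &ctsk);      /* TASK1 is an ID number assumed to be reserved here  */
}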
The configurator is central to the configuration process.
Users write static APIs in the system configuration file in order to generate objects. The configurator then parses that file and automatically generates a kernel construction and initialization file together with a header file.

In the kernel construction and initialization file, task initialization blocks (described later in more detail) are defined, while in the header file macros for the automatically assigned object IDs are generated. By including this header file, users can issue a service call with a macro as its parameter rather than having to use a raw ID number.
(For example, act_tsk(TASK1).)
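
As a hedged end-to-end sketch (the file names and the TASK1 task are assumptions following the conventions described above), in the system configuration file one line creates the task:

CRE_TSK(TASK1, {TA_ACT, 0, task1_main, 5, 1024, NULL});

In the application code, including the generated header makes the assigned ID available as a macro:

#include <kernel.h>
#include "kernel_cfg.h"      /* generated by the configurator; defines TASK1 */

void some_other_task(intptr_t exinf)
{
    act_tsk(TASK1);          /* start TASK1 by its macro name instead of a raw ID number */
}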
The ASP kernel provides the following functionalities:
- Task management
- Task synchronization
- Task exception processing
- Synchronization and communication
  - Semaphore, eventflag, mutex
  - Data queue, priority-based data queue, mailbox
- Memory-pool management
  - Fixed-length memory pool
- Time management
  - System time management, cycle handler, alarm handler
- System state management
- Interrupt management
- CPU exception management
- System construction management

Here, we pay particular attention to the mechanisms of the priority-based data queue and the interrupt processing model, which are newly introduced in the ASP kernel.

<Priority-based Data Queue>
Data items in a priority-based data queue are assigned priorities and are reordered by priority.
So why is this kind of queue needed?
As we know, the kernel already provides mailboxes, which are used to send messages of variable size. Since the data to be sent vary in size, the sending side must prepare a memory area to hold the data and then send a pointer to that area; the head part of that area is used by the kernel to link the messages together.

Unlike data queues, mailboxes have the merit that they can never become full (the sender never enters WAITING). However, a risk exists: since the kernel uses the head part of memory allocated by the application side, an application mistake such as overwriting the memory the kernel is using can send the kernel into an infinite loop. Such a situation is unacceptable for the Protection Extension of μITRON 4.0, so the Protection Extension modified the mailbox specification: the kernel no longer uses the application-allocated memory but instead allocates that area itself. However, this modification is incompatible with the old mailbox specification, because a sender can now enter WAITING when the mailbox is full. In other words, the behavior of snd_mbx (the service call used to send a message to a mailbox) changes.

Since we plan to build a kernel with memory protection functionality on the basis of this kernel, the mailbox specification was left as it was, and a new facility that supports priority-based sending was needed instead. Against this background, the priority-based data queue was added to the specification.
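
A hedged usage sketch of the priority-based data queue follows; the queue ID PDQ_MSG, the data values, and the data priorities are assumptions, and the parameter types shown follow the ASP headers.

#include <kernel.h>
#include "kernel_cfg.h"      /* PDQ_MSG is assumed to be created with CRE_PDQ in the configuration file */

void sender_task(intptr_t exinf)
{
    /* Send two data items; assuming both are queued before the receiver runs,
       the item with the smaller priority value (1) is delivered first
       even though it is sent second. */
    snd_pdq(PDQ_MSG, (intptr_t) 100, 5);
    snd_pdq(PDQ_MSG, (intptr_t) 200, 1);
}

void receiver_task(intptr_t exinf)
{
    intptr_t data;
    PRI      datapri;

    rcv_pdq(PDQ_MSG, &data, &datapri);   /* receives 200 (priority 1) first */
    rcv_pdq(PDQ_MSG, &data, &datapri);   /* then 100 (priority 5)           */
}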

<Interrupt Processing Model and Interrupt Management>
When an interrupt occurs, the corresponding Interrupt Service Routine (ISR) registered by the application, or the corresponding interrupt handler, is called through the entry/exit processing inside the kernel. Both ISRs and interrupt handlers have higher precedence than tasks. In general, the application prepares ISRs, which are independent of the processor's interrupt architecture: programmers can write interrupt code without knowing the details of the processor architecture, for example without knowing which register selects level-triggered or edge-triggered operation. In special situations where ISRs are not sufficient, interrupt handlers can be used instead.


APIs for Interrupt Management
DEF_INH : registers an interrupt handler.
ATT_ISR : registers an ISR.
CFG_INT : sets the attributes of an interrupt request line.
-> specifies the interrupt priority, level- or edge-triggered operation, whether the interrupt is initially masked, and so on.
DEF_ICS : defines the stack area used in non-task contexts.
dis_int : disables interrupts.
ena_int : enables interrupts.
chg_ipm : changes the interrupt priority mask.
get_ipm : gets the interrupt priority mask.
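
To tie these APIs together, here is a hedged sketch; the interrupt number INTNO_SIO, the attribute flags, the priority values, the ISR name, and the semaphore ID are assumptions that depend on the target hardware and the configuration.

In the system configuration file:

CFG_INT(INTNO_SIO, {TA_ENAINT | TA_EDGE, -3});    /* edge-triggered, interrupt priority -3, initially enabled */
ATT_ISR({TA_NULL, 0, INTNO_SIO, sio_isr, 1});     /* attach sio_isr to that interrupt, ISR priority 1         */

In the application code:

#include <kernel.h>
#include "kernel_cfg.h"

void sio_isr(intptr_t exinf)
{
    /* Runs in a non-task context with higher precedence than any task;
       only non-blocking, i-prefixed service calls may be used here. */
    isig_sem(SEM_SIO_RCV);   /* SEM_SIO_RCV is an assumed semaphore ID */
}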

The Interrupt Processing Model will be explained later on.

We will describe the implementation policies in more detail in the next chapter.