diff --git a/doc/user/emulation/events.md b/doc/user/emulation/events.md
index 11da780..e1ee7ae 100644
--- a/doc/user/emulation/events.md
+++ b/doc/user/emulation/events.md
@@ -1,335 +1,568 @@
 # Emulator events
-This file contains an exhaustive list of events supported by the emulator.
+This is an exhaustive list of the events recognized by the emulator.
+Built on Jan 29 2024.
-- Punctual events don't produce a state transition.
-- All events refer to the current thread.
-- Descriptions must be kept short.
+## Model nanos6
-```txt
-**********************************************************
-Please keep this list synchronized with the emulator code!
-**********************************************************
+List of events for the model *nanos6* with identifier **`6`** at version `1.0.0`:
+
+
6Yc+(u32 typeid, str label)
+
creates task type %{typeid} with label "%{label}"
+
6Tc(u32 taskid, u32 typeid)
+
creates task %{taskid} with type %{typeid}
+
6Tx(u32 taskid)
+
executes the task %{taskid}
+
6Te(u32 taskid)
+
ends the task %{taskid}
+
6Tp(u32 taskid)
+
pauses the task %{taskid}
+
6Tr(u32 taskid)
+
resumes the task %{taskid}
+
6W[
+
enters worker main loop, looking for tasks
+
6W]
+
leaves worker main loop
+
6Wt
+
begins handling a task via handleTask()
+
6WT
+
ceases handling a task via handleTask()
+
6Ww
+
begins switching to another worker via switchTo()
+
6WW
+
ceases switching to another worker via switchTo()
+
6Wm
+
begins migrating the current worker to another CPU
+
6WM
+
ceases migrating the current worker to another CPU
+
6Ws
+
begins suspending the worker via suspend()
+
6WS
+
ceases suspending the worker via suspend()
+
6Wr
+
begins resuming another worker via resume()
+
6WR
+
ceases resuming another worker via resume()
+
6Wg
+
enters sponge mode (absorbing system noise)
+
6WG
+
leaves sponge mode (absorbing system noise)
+
6W*
+
signals another worker to wake up
+
6Pp
+
sets progress state to Progressing
+
6Pr
+
sets progress state to Resting
+
6Pa
+
sets progress state to Absorbing
+
6C[
+
begins creating a new task
+
6C]
+
ceases creating a new task
+
6U[
+
begins submitting a task via submitTask()
+
6U]
+
ceases submitting a task via submitTask()
+
6F[
+
begins spawning a function via spawnFunction()
+
6F]
+
ceases spawning a function via spawnFunction()
+
6t[
+
enters the task body
+
6t]
+
leaves the task body
+
6O[
+
begins running the task body as taskfor collaborator
+
6O]
+
ceases running the task body as taskfor collaborator
+
6Ma
+
starts allocating memory
+
6MA
+
stops allocating memory
+
6Mf
+
starts freeing memory
+
6MF
+
stops freeing memory
+
6Dr
+
begins registration of task dependencies
+
6DR
+
ceases registration of task dependencies
+
6Du
+
begins unregistration of task dependencies
+
6DU
+
ceases unregistration of task dependencies
+
6S[
+
begins scheduler serving mode
+
6S]
+
ceases scheduler serving mode
+
6Sa
+
begins submitting a ready task via addReadyTask()
+
6SA
+
ceases submitting a ready task via addReadyTask()
+
6Sp
+
begins processing ready tasks via processReadyTasks()
+
6SP
+
ceases processing ready tasks via processReadyTasks()
+
6S@
+
self-assigns a task
+
6Sr
+
receives a task from another thread
+
6Ss
+
sends a task to another thread
+
6Bb
+
begins blocking the current task
+
6BB
+
ceases blocking the current task
+
6Bu
+
begins unblocking a task
+
6BU
+
ceases unblocking a task
+
6Bw
+
enters a task wait
+
6BW
+
leaves a task wait
+
6Bf
+
enters a wait for
+
6BF
+
leaves a wait for
+
6He
+
begins execution as external thread
+
6HE
+
ceases execution as external thread
+
6Hw
+
begins execution as worker
+
6HW
+
ceases execution as worker
+
6Hl
+
begins execution as leader
+
6HL
+
ceases execution as leader
+
6Hm
+
begins execution as main thread
+
6HM
+
ceases execution as main thread
+
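Each code above is an MCV triplet (the removed header below spells it out as "Model Category Value"): the model identifier `6`, a category letter, and a value character, optionally followed by a typed payload whose fields fill the `%{...}` placeholders in the description. The trailing `+` in signatures such as `6Yc+(u32 typeid, str label)` appears to mark jumbo events, whose variable-sized payload (here the label string) does not fit in a regular event. Below is a minimal sketch of how a runtime might emit these through libovni; it assumes the `ovni_ev_*` helpers declared in `ovni.h` and a thread already initialized for instrumentation, so treat it as illustrative rather than the reference Nanos6 instrumentation:

```c
#include <ovni.h>
#include <stdint.h>
#include <string.h>

/* Emits 6Tx, "executes the task %{taskid}", with a u32 payload. */
static void instr_task_execute(uint32_t taskid)
{
	struct ovni_ev ev = {0};

	ovni_ev_set_clock(&ev, ovni_clock_now());
	ovni_ev_set_mcv(&ev, "6Tx");
	ovni_payload_add(&ev, (uint8_t *) &taskid, sizeof(taskid));
	ovni_ev_emit(&ev);
}

/* Emits 6Yc+, "creates task type %{typeid} with label ...": assumed
 * to be a jumbo event, so the typeid and the label travel in a jumbo
 * buffer instead of the regular payload. The label (including its
 * terminating NUL) is assumed to fit in buf. */
static void instr_type_create(uint32_t typeid, const char *label)
{
	struct ovni_ev ev = {0};
	uint8_t buf[256];
	size_t nlabel = strlen(label) + 1;

	memcpy(buf, &typeid, sizeof(typeid));
	memcpy(buf + sizeof(typeid), label, nlabel);

	ovni_ev_set_clock(&ev, ovni_clock_now());
	ovni_ev_set_mcv(&ev, "6Yc");
	ovni_ev_jumbo_emit(&ev, buf, sizeof(typeid) + nlabel);
}
```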
-MCV = Model Category Value
+## Model nodes
-------------------------------------------------------------
-MCV Description
------------------- Ovni 1.0.0 (model=O) --------------------
+List of events for the model *nodes* with identifier **`D`** at version `1.0.0`:
+
+
DR[
+
begins registering task accesses
+
DR]
+
ceases registering task accesses
+
DU[
+
begins unregistering task accesses
+
DU]
+
ceases unregistering task accesses
+
DW[
+
enters a blocking condition (waiting for an If0 task)
+
DW]
+
leaves a blocking condition (waiting for an If0 task)
+
DI[
+
begins the inline execution of an If0 task
+
DI]
+
ceases the inline execution of an If0 task
+
DT[
+
enters a taskwait
+
DT]
+
leaves a taskwait
+
DC[
+
begins creating a task
+
DC]
+
ceases creating a task
+
DS[
+
begins submitting a task
+
DS]
+
ceases submitting a task
+
DP[
+
begins spawning a function
+
DP]
+
ceases spawning a function
+
-OHC Creates a new thread (punctual event)
-OHx Begins the execution
-OHp Pauses the execution
-OHc Enters the cooling state (about to be paused)
-OHw Enters the warming state (about to be running)
-OHe Ends the execution
+## Model kernel
-OAs Switches it's own affinity to the given CPU
-OAr Remotely switches the affinity of the given thread
+List of events for the model *kernel* with identifier **`K`** at version `1.0.0`:
+
+
KO[
+
out of CPU
+
KO]
+
back to CPU
+
-OB. Emits a burst event to measure latency
+## Model mpi
-OU[ Enters a region which contain past events (HACK)
-OU] Exits the region of past events (HACK)
+List of events for the model *mpi* with identifier **`M`** at version `1.0.0`:
+
+
MUf
+
enters MPI_Finalize()
+
MUF
+
leaves MPI_Finalize()
+
MUi
+
enters MPI_Init()
+
MUI
+
leaves MPI_Init()
+
MUt
+
enters MPI_Init_thread()
+
MUT
+
leaves MPI_Init_thread()
+
MW[
+
enters MPI_Wait()
+
MW]
+
leaves MPI_Wait()
+
MWa
+
enters MPI_Waitall()
+
MWA
+
leaves MPI_Waitall()
+
MWs
+
enters MPI_Waitsome()
+
MWS
+
leaves MPI_Waitsome()
+
MWy
+
enters MPI_Waitany()
+
MWY
+
leaves MPI_Waitany()
+
MT[
+
enters MPI_Test()
+
MT]
+
leaves MPI_Test()
+
MTa
+
enters MPI_Testall()
+
MTA
+
leaves MPI_Testall()
+
MTy
+
enters MPI_Testany()
+
MTY
+
leaves MPI_Testany()
+
MTs
+
enters MPI_Testsome()
+
MTS
+
leaves MPI_Testsome()
+
MS[
+
enters MPI_Send()
+
MS]
+
leaves MPI_Send()
+
MSb
+
enters MPI_Bsend()
+
MSB
+
leaves MPI_Bsend()
+
MSr
+
enters MPI_Rsend()
+
MSR
+
leaves MPI_Rsend()
+
MSs
+
enters MPI_Ssend()
+
MSS
+
leaves MPI_Ssend()
+
MR[
+
enters MPI_Recv()
+
MR]
+
leaves MPI_Recv()
+
MRs
+
enters MPI_Sendrecv()
+
MRS
+
leaves MPI_Sendrecv()
+
MRo
+
enters MPI_Sendrecv_replace()
+
MRO
+
leaves MPI_Sendrecv_replace()
+
MAg
+
enters MPI_Allgather()
+
MAG
+
leaves MPI_Allgather()
+
MAr
+
enters MPI_Allreduce()
+
MAR
+
leaves MPI_Allreduce()
+
MAa
+
enters MPI_Alltoall()
+
MAA
+
leaves MPI_Alltoall()
+
MCb
+
enters MPI_Barrier()
+
MCB
+
leaves MPI_Barrier()
+
MCe
+
enters MPI_Exscan()
+
MCE
+
leaves MPI_Exscan()
+
MCs
+
enters MPI_Scan()
+
MCS
+
leaves MPI_Scan()
+
MDb
+
enters MPI_Bcast()
+
MDB
+
leaves MPI_Bcast()
+
MDg
+
enters MPI_Gather()
+
MDG
+
leaves MPI_Gather()
+
MDs
+
enters MPI_Scatter()
+
MDS
+
leaves MPI_Scatter()
+
ME[
+
enters MPI_Reduce()
+
ME]
+
leaves MPI_Reduce()
+
MEs
+
enters MPI_Reduce_scatter()
+
MES
+
leaves MPI_Reduce_scatter()
+
MEb
+
enters MPI_Reduce_scatter_block()
+
MEB
+
leaves MPI_Reduce_scatter_block()
+
Ms[
+
enters MPI_Isend()
+
Ms]
+
leaves MPI_Isend()
+
Msb
+
enters MPI_Ibsend()
+
MsB
+
leaves MPI_Ibsend()
+
Msr
+
enters MPI_Irsend()
+
MsR
+
leaves MPI_Irsend()
+
Mss
+
enters MPI_Issend()
+
MsS
+
leaves MPI_Issend()
+
Mr[
+
enters MPI_Irecv()
+
Mr]
+
leaves MPI_Irecv()
+
Mrs
+
enters MPI_Isendrecv()
+
MrS
+
leaves MPI_Isendrecv()
+
Mro
+
enters MPI_Isendrecv_replace()
+
MrO
+
leaves MPI_Isendrecv_replace()
+
Mag
+
enters MPI_Iallgather()
+
MaG
+
leaves MPI_Iallgather()
+
Mar
+
enters MPI_Iallreduce()
+
MaR
+
leaves MPI_Iallreduce()
+
Maa
+
enters MPI_Ialltoall()
+
MaA
+
leaves MPI_Ialltoall()
+
Mcb
+
enters MPI_Ibarrier()
+
McB
+
leaves MPI_Ibarrier()
+
Mce
+
enters MPI_Iexscan()
+
McE
+
leaves MPI_Iexscan()
+
Mcs
+
enters MPI_Iscan()
+
McS
+
leaves MPI_Iscan()
+
Mdb
+
enters MPI_Ibcast()
+
MdB
+
leaves MPI_Ibcast()
+
Mdg
+
enters MPI_Igather()
+
MdG
+
leaves MPI_Igather()
+
Mds
+
enters MPI_Iscatter()
+
MdS
+
leaves MPI_Iscatter()
+
Me[
+
enters MPI_Ireduce()
+
Me]
+
leaves MPI_Ireduce()
+
Mes
+
enters MPI_Ireduce_scatter()
+
MeS
+
leaves MPI_Ireduce_scatter()
+
Meb
+
enters MPI_Ireduce_scatter_block()
+
MeB
+
leaves MPI_Ireduce_scatter_block()
+
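Every event in this model comes as an enter/leave pair bracketing the corresponding MPI call, with the second code of the pair (an uppercase letter or `]`) marking the return. A hedged sketch of how such a pair could be produced from a PMPI interposition wrapper, reusing the same assumed libovni helpers as in the sketch above (the actual ovni MPI instrumentation may be organized differently):

```c
#include <mpi.h>
#include <ovni.h>

/* Emits a bare event (no payload) with the given MCV code. */
static void emit(const char *mcv)
{
	struct ovni_ev ev = {0};

	ovni_ev_set_clock(&ev, ovni_clock_now());
	ovni_ev_set_mcv(&ev, mcv);
	ovni_ev_emit(&ev);
}

/* Profiling entry point: brackets the real call with MS[ / MS]. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
		int dest, int tag, MPI_Comm comm)
{
	emit("MS["); /* enters MPI_Send() */
	int err = PMPI_Send(buf, count, datatype, dest, tag, comm);
	emit("MS]"); /* leaves MPI_Send() */
	return err;
}
```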
------------------ nOS-V 1.0.0 (model=V) -------------------
+## Model ovni
-VTc Creates a new task (punctual event)
-VTx Task execute (enter task body)
-VTe Task end (exit task body)
-VTp Task pause
-VTr Task resume
+List of events for the model *ovni* with identifier **`O`** at version `1.0.0`:
+
+
OAr(i32 cpu, i32 tid)
+
changes the affinity of thread %{tid} to CPU %{cpu}
+
OAs(i32 cpu)
+
switches its own affinity to CPU %{cpu}
+
OB.
+
emits a burst event to measure latency
+
OHC(i32 cpu, u64 tag)
+
creates a new thread on CPU %{cpu} with tag %#llx{tag}
+
OHc
+
enters the Cooling state (about to be paused)
+
OHe
+
ends the execution
+
OHp
+
pauses the execution
+
OHr
+
resumes the execution
+
OHw
+
enters the Warming state (about to be running)
+
OHx(i32 cpu, i32 tid, u64 tag)
+
begins the execution on CPU %{cpu}, created from thread %{tid} with tag %#llx{tag}
+
OCn(i32 cpu)
+
informs there are %{cpu} CPUs
+
OF[
+
begins flushing events to disk
+
OF]
+
ceases flushing events to disk
+
OU[
+
enters unordered event region
+
OU]
+
leaves unordered event region
+
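The `OB.` event is punctual: bursts are emitted back to back, so the clock deltas between consecutive events in the trace estimate the per-event instrumentation latency. A minimal sketch, under the same libovni assumptions as above:

```c
#include <ovni.h>

/* Emits a train of OB. burst events; the gaps between their
 * timestamps approximate the cost of emitting a single event. */
static void emit_bursts(int nbursts)
{
	for (int i = 0; i < nbursts; i++) {
		struct ovni_ev ev = {0};

		ovni_ev_set_clock(&ev, ovni_clock_now());
		ovni_ev_set_mcv(&ev, "OB.");
		ovni_ev_emit(&ev);
	}
}
```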
-VYc Task type create (punctual event)
+## Model tampi
-VSr Receives a task from another thread (punctual event)
-VSs Sends a task to another thread (punctual event)
-VS@ Self-assigns itself a task (punctual event)
-VSh Enters the hungry state, waiting for a task
-VSf Is no longer hungry
-VS[ Enters the scheduler server mode
-VS] Ends the scheduler server mode
+List of events for the model *tampi* with identifier **`T`** at version `1.0.0`:
+
+
TCi
+
starts issuing a non-blocking communication operation
+
TCI
+
stops issuing a non-blocking communication operation
+
TGc
+
starts checking pending requests from the global array
+
TGC
+
stops checking pending requests from the global array
+
TLi
+
enters the library code at an API function
+
TLI
+
leaves the library code at an API function
+
TLp
+
enters the library code at a polling function
+
TLP
+
leaves the library code at a polling function
+
TQa
+
starts adding a ticket/requests to a queue
+
TQA
+
stops adding a ticket/requests to a queue
+
TQt
+
starts transferring tickets/requests from queues to global array
+
TQT
+
stops transferring tickets/requests from queues to global array
+
TRc
+
starts processing a completed request
+
TRC
+
stops processing a completed request
+
TRt
+
starts testing a single request with MPI_Test
+
TRT
+
stops testing a single request with MPI_Test
+
TRa
+
starts testing several requests with MPI_Testall
+
TRA
+
stops testing several requests with MPI_Testall
+
TRs
+
starts testing several requests with MPI_Testsome
+
TRS
+
stops testing several requests with MPI_Testsome
+
TTc
+
starts creating a ticket linked to a set of requests and a task
+
TTC
+
stops creating a ticket linked to a set of requests and a task
+
TTw
+
starts waiting for a ticket completion
+
TTW
+
stops waiting for a ticket completion
+
-VU[ Starts to submit a task -VU] Ends the submission of a task +## Model nosv -VMa Starts allocating memory -VMA Ends allocating memory -VMf Starts freeing memory -VMF Ends freeing memory - -VAr Enters nosv_create() -VAR Exits nosv_create() -VAd Enters nosv_destroy() -VAD Exits nosv_destroy() -VAs Enters nosv_submit() -VAS Exits nosv_submit() -VAp Enters nosv_pause() -VAP Exits nosv_pause() -VAy Enters nosv_yield() -VAY Exits nosv_yield() -VAw Enters nosv_waitfor() -VAW Exits nosv_waitfor() -VAc Enters nosv_schedpoint() -VAC Exits nosv_schedpoint() - -VHa Enters nosv_attach() -VHA Exits nosv_detach() -VHw Begins the execution as a worker -VHW Ends the execution as a worker -VHd Begins the execution as the delegate -VHD Ends the execution as the delegate - ------------------ NODES 1.0.0 (model=D) ------------------- - -DR[ Begins the registration of a task's accesses -DR] Ends the registration of a task's accesses - -DU[ Begins the unregistration of a task's accesses -DU] Ends the unregistration of a task's accesses - -DW[ Enters a blocking condition (waiting for an If0 task) -DW] Exits a blocking condition (waiting for an If0 task) - -DI[ Begins the inline execution of an If0 task -DI] Ends the inline execution of an If0 task - -DT[ Enters a taskwait -DT] Exits a taskwait - -DC[ Begins the creation of a task -DC] Ends the creation of a task - -DS[ Begins the submit of a task -DS] Ends the submit of a task - -DP[ Begins the spawn of a function -DP] Ends the spawn of a function - ------------------ Kernel 1.0.0 (model=K) ------------------- - -KCO Is out of the CPU due to a context switch -KCI Is back in the CPU due to a context switch - ------------------ Nanos6 1.0.0 (model=6) ------------------- - -6Tc Creates a new task -6Tx Task execute -6Te Task end -6Tp Task pause -6Tr Task resume - -6Yc Task type create (punctual event) - -6C[ Begins creating a new task -6C] Ends creating a new task - -6S[ Enters the scheduler serving mode -6S] Ends the scheduler serving mode -6Sa Begins to submit a ready task via addReadyTask() -6SA Ends submitting a ready task via addReadyTask() -6Sp Begins to process ready tasks via processReadyTasks() -6SP Ends processing ready taska via processReadyTasks() -6Sr Receives a task from another thread (punctual event) -6Ss Sends a task to another thread (punctual event) -6S@ Self-assigns itself a task (punctual event) - -6W[ Begins the worker body loop, looking for tasks -6W] Ends the worker body loop -6Wt Begins handling a task via handleTask() -6WT Ends handling a task via handleTask() -6Ww Begins switching to another worker via switchTo() -6WW Ends switching to another worker via switchTo() -6Wm Begins migrating the CPU via migrate() -6WM Ends migrating the CPU via migrate() -6Ws Begins suspending the worker via suspend() -6WS Ends suspending the worker via suspend() -6Wr Begins resuming another worker via resume() -6WR Ends resuming another worker via resume() -6Wg Enters the sponge mode -6WG Exits the sponge mode -6W* Signals another thread to wake up (punctual event) - -6Pp Set progress state to Progressing -6Pr Set progress state to Resting -6Pa Set progress state to Absorbing - -6U[ Starts to submit a task via submitTask() -6U] Ends the submission of a task via submitTask() - -6F[ Begins to spawn a function via spawnFunction() -6F] Ends spawning a function - -6t[ Begins running the task body -6t] Ends running the task body - -6O[ Begins running the task body as taskfor collaborator -6O] Ends running the task body as taskfor collaborator - -6Dr Begins the 
registration of a task's accesses -6DR Ends the registration of a task's accesses -6Du Begins the unregistration of a task's accesses -6DU Ends the unregistration of a task's accesses - -6Bb Begins to block the current task via blockCurrentTask() -6BB Ends blocking the current task via blockCurrentTask() -6Bu Begins to unblock a task -6BU Ends unblocking a task -6Bw Enters taskWait() -6BW Exits taskWait() -6Bf Enters taskFor() -6BF Exits taskFor() - -6He Sets itself as external thread -6HE Unsets itself as external thread -6Hw Sets itself as worker thread -6HW Unsets itself as worker thread -6Hl Sets itself as leader thread -6HL Unsets itself as leader thread -6Hm Sets itself as main thread -6HM Unsets itself as main thread - -6Ma Begins allocating memory -6MA Ends allocating memory -6Mf Begins freeing memory -6MF Ends freeing memory - ------------------ TAMPI 1.0.0 (model=T) ------------------- - -TCi Begins to issue a non-blocking communication operation -TCI Ends issuing a non-blocking communication operation - -TGc Begins to check pending requests from the global array -TGC Ends checking pending requests from the global array - -TLi Begins the library code at an API function -TLI Ends the library code at an API function -TLp Begins the library code at a polling function -TLP Ends the library code at a polling function - -TQa Begins to add a ticket/requests to a queue -TQA Ends adding a ticket/requests to a queue -TQt Begins to transfer tickets/requests from queues to global array -TQT Ends transfering tickets/requests from queues to global array - -TRc Begins to process a completed request -TRC Ends processing a completed request -TRt Begins to test a single request with MPI_Test -TRT Ends testing a single request with MPI_Test -TRa Begins to test several requests with MPI_Testall -TRA Ends testing several requests with MPI_Testall -TRs Begins to test several requests with MPI_Testsome -TRS Ends testing several requests with MPI_Testsome - -TTc Begins to create a ticket linked to a set of requests and a task -TTC Ends creating a ticket linked to a set of requests and a task -TTw Begins to wait a ticket completion -TTW Ends waiting a ticket completion - ------------------ MPI 1.0.0 (model=M) ---------------------- - -MUi Enters MPI_Init -MUI Exits MPI_Init -MUt Enters MPI_Init_thread -MUT Exits MPI_Init_thread -MUf Enters MPI_Finalize -MUF Exits MPI_Finalize - -MW[ Enters MPI_Wait -MW] Exits MPI_Wait -MWa Enters MPI_Waitall -MWA Exits MPI_Waitall -MWy Enters MPI_Waitany -MWY Exits MPI_Waitany -MWs Enters MPI_Waitsome -MWS Exits MPI_Waitsome - -MT[ Enters MPI_Test -MT] Exits MPI_Test -MTa Enters MPI_Testall -MTA Exits MPI_Testall -MTy Enters MPI_Testany -MTY Exits MPI_Testany -MTs Enters MPI_Testsome -MTS Exits MPI_Testsome - -MS[ Enters MPI_Send -MS] Exits MPI_Send -MSb Enters MPI_Bsend -MSB Exits MPI_Bsend -MSr Enters MPI_Rsend -MSR Exits MPI_Rsend -MSs Enters MPI_Ssend -MSS Exits MPI_Ssend -MR[ Enters MPI_Recv -MR] Exits MPI_Recv -MRs Enters MPI_Sendrecv -MRS Exits MPI_Sendrecv -MRo Enters MPI_Sendrecv_replace -MRO Exits MPI_Sendrecv_replace - -MAg Enters MPI_Allgather -MAG Exits MPI_Allgather -MAr Enters MPI_Allreduce -MAR Exits MPI_Allreduce -MAa Enters MPI_Alltoall -MAA Exits MPI_Alltoall -MCb Enters MPI_Barrier -MCB Exits MPI_Barrier -MCe Enters MPI_Exscan -MCE Exits MPI_Exscan -MCs Enters MPI_Scan -MCS Exits MPI_Scan -MDb Enters MPI_Bcast -MDB Exits MPI_Bcast -MDg Enters MPI_Gather -MDG Exits MPI_Gather -MDs Enters MPI_Scatter -MDS Exits MPI_Scatter -ME[ Enters MPI_Reduce -ME] 
Exits MPI_Reduce -MEs Enters MPI_Reduce_scatter -MES Exits MPI_Reduce_scatter -MEb Enters MPI_Reduce_scatter_block -MEB Exits MPI_Reduce_scatter_block - -Ms[ Enters MPI_Isend -Ms] Exits MPI_Isend -Msb Enters MPI_Ibsend -MsB Exits MPI_Ibsend -Msr Enters MPI_Irsend -MsR Exits MPI_Irsend -Mss Enters MPI_Issend -MsS Exits MPI_Issend -Mr[ Enters MPI_Irecv -Mr] Exits MPI_Irecv -Mrs Enters MPI_Isendrecv -MrS Exits MPI_Isendrecv -Mro Enters MPI_Isendrecv_replace -MrO Exits MPI_Isendrecv_replace - -Mag Enters MPI_Iallgather -MaG Exits MPI_Iallgather -Mar Enters MPI_Iallreduce -MaR Exits MPI_Iallreduce -Maa Enters MPI_Ialltoall -MaA Exits MPI_Ialltoall -Mcb Enters MPI_Ibarrier -McB Exits MPI_Ibarrier -Mce Enters MPI_Iexscan -McE Exits MPI_Iexscan -Mcs Enters MPI_Iscan -McS Exits MPI_Iscan -Mdb Enters MPI_Ibcast -MdB Exits MPI_Ibcast -Mdg Enters MPI_Igather -MdG Exits MPI_Igather -Mds Enters MPI_Iscatter -MdS Exits MPI_Iscatter -Me[ Enters MPI_Ireduce -Me] Exits MPI_Ireduce -Mes Enters MPI_Ireduce_scatter -MeS Exits MPI_Ireduce_scatter -Meb Enters MPI_Ireduce_scatter_block -MeB Exits MPI_Ireduce_scatter_block -``` +List of events for the model *nosv* with identifier **`V`** at version `1.0.0`: +
+
VTc(u32 taskid, u32 typeid)
+
creates task %{taskid} with type %{typeid}
+
VTx(u32 taskid)
+
executes the task %{taskid}
+
VTe(u32 taskid)
+
ends the task %{taskid}
+
VTp(u32 taskid)
+
pauses the task %{taskid}
+
VTr(u32 taskid)
+
resumes the task %{taskid}
+
VYc+(u32 typeid, str label)
+
creates task type %{typeid} with label "%{label}"
+
VSr
+
receives a task from another thread
+
VSs
+
sends a task to another thread
+
VS@
+
self-assigns a task
+
VSh
+
enters the hungry state, waiting for work
+
VSf
+
is no longer hungry
+
VS[
+
enters scheduler server mode
+
VS]
+
leaves scheduler server mode
+
VU[
+
starts submitting a task
+
VU]
+
stops submitting a task
+
VMa
+
starts allocating memory
+
VMA
+
stops allocating memory
+
VMf
+
starts freeing memory
+
VMF
+
stops freeing memory
+
VAr
+
enters nosv_create()
+
VAR
+
leaves nosv_create()
+
VAd
+
enters nosv_destroy()
+
VAD
+
leaves nosv_destroy()
+
VAs
+
enters nosv_submit()
+
VAS
+
leaves nosv_submit()
+
VAp
+
enters nosv_pause()
+
VAP
+
leaves nosv_pause()
+
VAy
+
enters nosv_yield()
+
VAY
+
leaves nosv_yield()
+
VAw
+
enters nosv_waitfor()
+
VAW
+
leaves nosv_waitfor()
+
VAc
+
enters nosv_schedpoint()
+
VAC
+
leaves nosv_schedpoint()
+
VHa
+
enters nosv_attach()
+
VHA
+
leaves nosv_detach()
+
VHw
+
begins execution as worker
+
VHW
+
ceases execution as worker
+
VHd
+
begins execution as delegate
+
VHD
+
ceases execution as delegate
+